2025-05-13 22:36:41.237396 | Job console starting
2025-05-13 22:36:41.251246 | Updating git repos
2025-05-13 22:36:41.314765 | Cloning repos into workspace
2025-05-13 22:36:41.453029 | Restoring repo states
2025-05-13 22:36:41.474789 | Merging changes
2025-05-13 22:36:41.474818 | Checking out repos
2025-05-13 22:36:41.720953 | Preparing playbooks
2025-05-13 22:36:42.361827 | Running Ansible setup
2025-05-13 22:36:46.758649 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-13 22:36:47.508983 |
2025-05-13 22:36:47.509186 | PLAY [Base pre]
2025-05-13 22:36:47.526489 |
2025-05-13 22:36:47.526635 | TASK [Setup log path fact]
2025-05-13 22:36:47.566400 | orchestrator | ok
2025-05-13 22:36:47.587504 |
2025-05-13 22:36:47.587653 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-13 22:36:47.641427 | orchestrator | ok
2025-05-13 22:36:47.656159 |
2025-05-13 22:36:47.656292 | TASK [emit-job-header : Print job information]
2025-05-13 22:36:47.714672 | # Job Information
2025-05-13 22:36:47.715020 | Ansible Version: 2.16.14
2025-05-13 22:36:47.715088 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-05-13 22:36:47.715150 | Pipeline: post
2025-05-13 22:36:47.715191 | Executor: 521e9411259a
2025-05-13 22:36:47.715229 | Triggered by: https://github.com/osism/testbed/commit/1a7621aeb5c44627247e65644c47be07a179edaa
2025-05-13 22:36:47.715270 | Event ID: 6d6ba3fa-302f-11f0-941f-bfdc037dc9f7
2025-05-13 22:36:47.724455 |
2025-05-13 22:36:47.724577 | LOOP [emit-job-header : Print node information]
2025-05-13 22:36:47.849799 | orchestrator | ok:
2025-05-13 22:36:47.850154 | orchestrator | # Node Information
2025-05-13 22:36:47.850228 | orchestrator | Inventory Hostname: orchestrator
2025-05-13 22:36:47.850280 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-13 22:36:47.850326 | orchestrator | Username: zuul-testbed04
2025-05-13 22:36:47.850369 | orchestrator | Distro: Debian 12.10
2025-05-13 22:36:47.850418 | orchestrator | Provider: static-testbed
2025-05-13 22:36:47.850461 | orchestrator | Region:
2025-05-13 22:36:47.850504 | orchestrator | Label: testbed-orchestrator
2025-05-13 22:36:47.850546 | orchestrator | Product Name: OpenStack Nova
2025-05-13 22:36:47.850586 | orchestrator | Interface IP: 81.163.193.140
2025-05-13 22:36:47.876434 |
2025-05-13 22:36:47.876600 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-13 22:36:48.367626 | orchestrator -> localhost | changed
2025-05-13 22:36:48.375999 |
2025-05-13 22:36:48.376123 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-13 22:36:49.406935 | orchestrator -> localhost | changed
2025-05-13 22:36:49.424472 |
2025-05-13 22:36:49.424608 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-13 22:36:49.724843 | orchestrator -> localhost | ok
2025-05-13 22:36:49.740107 |
2025-05-13 22:36:49.740277 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-13 22:36:49.775459 | orchestrator | ok
2025-05-13 22:36:49.794515 | orchestrator | included: /var/lib/zuul/builds/909ac6d6933c43bb91e99e3e1a9563b8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-13 22:36:49.802609 |
2025-05-13 22:36:49.802713 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-13 22:36:50.813048 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-13 22:36:50.813562 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/909ac6d6933c43bb91e99e3e1a9563b8/work/909ac6d6933c43bb91e99e3e1a9563b8_id_rsa
2025-05-13 22:36:50.813649 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/909ac6d6933c43bb91e99e3e1a9563b8/work/909ac6d6933c43bb91e99e3e1a9563b8_id_rsa.pub
2025-05-13 22:36:50.813704 | orchestrator -> localhost | The key fingerprint is:
2025-05-13 22:36:50.813762 | orchestrator -> localhost | SHA256:ZVbqJUqUmwobhKYygtWGuyaECKWFG1N3mQPqVswlHmo zuul-build-sshkey
2025-05-13 22:36:50.813808 | orchestrator -> localhost | The key's randomart image is:
2025-05-13 22:36:50.813868 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-13 22:36:50.813914 | orchestrator -> localhost | | +++=.oo.. . |
2025-05-13 22:36:50.813979 | orchestrator -> localhost | |=o=B+=+.. o |
2025-05-13 22:36:50.814023 | orchestrator -> localhost | |*BE+= ..o* . |
2025-05-13 22:36:50.814063 | orchestrator -> localhost | |X+..o .o* o |
2025-05-13 22:36:50.814104 | orchestrator -> localhost | |+.o. + .S . |
2025-05-13 22:36:50.814152 | orchestrator -> localhost | |..o . . |
2025-05-13 22:36:50.814194 | orchestrator -> localhost | | o |
2025-05-13 22:36:50.814234 | orchestrator -> localhost | | |
2025-05-13 22:36:50.814275 | orchestrator -> localhost | | |
2025-05-13 22:36:50.814315 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-13 22:36:50.814445 | orchestrator -> localhost | ok: Runtime: 0:00:00.504897
2025-05-13 22:36:50.827401 |
2025-05-13 22:36:50.827539 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-13 22:36:50.863008 | orchestrator | ok
2025-05-13 22:36:50.875764 | orchestrator | included: /var/lib/zuul/builds/909ac6d6933c43bb91e99e3e1a9563b8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-13 22:36:50.885047 |
2025-05-13 22:36:50.885144 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-13 22:36:50.908608 | orchestrator | skipping: Conditional result was False
2025-05-13 22:36:50.917584 |
2025-05-13 22:36:50.917692 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-13 22:36:51.513429 | orchestrator | changed
2025-05-13 22:36:51.522041 |
2025-05-13 22:36:51.522170 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-13 22:36:51.816032 | orchestrator | ok
2025-05-13 22:36:51.822400 |
2025-05-13 22:36:51.822515 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-13 22:36:52.258475 | orchestrator | ok
2025-05-13 22:36:52.268162 |
2025-05-13 22:36:52.268311 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-13 22:36:52.712447 | orchestrator | ok
2025-05-13 22:36:52.721427 |
2025-05-13 22:36:52.721548 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-13 22:36:52.745722 | orchestrator | skipping: Conditional result was False
2025-05-13 22:36:52.757252 |
2025-05-13 22:36:52.757379 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-13 22:36:53.198457 | orchestrator -> localhost | changed
2025-05-13 22:36:53.213989 |
2025-05-13 22:36:53.214132 | TASK [add-build-sshkey : Add back temp key]
2025-05-13 22:36:53.564449 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/909ac6d6933c43bb91e99e3e1a9563b8/work/909ac6d6933c43bb91e99e3e1a9563b8_id_rsa (zuul-build-sshkey)
2025-05-13 22:36:53.564707 | orchestrator -> localhost | ok: Runtime: 0:00:00.018842
2025-05-13 22:36:53.572911 |
2025-05-13 22:36:53.573076 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-13 22:36:53.962550 | orchestrator | ok
2025-05-13 22:36:53.971759 |
2025-05-13 22:36:53.971923 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-13 22:36:54.007110 | orchestrator | skipping: Conditional result was False
2025-05-13 22:36:54.080238 |
2025-05-13 22:36:54.080381 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-13 22:36:54.523802 | orchestrator | ok
2025-05-13 22:36:54.538250 |
2025-05-13 22:36:54.538378 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-13 22:36:54.585771 | orchestrator | ok
2025-05-13 22:36:54.596214 |
2025-05-13 22:36:54.596341 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-13 22:36:54.915397 | orchestrator -> localhost | ok
2025-05-13 22:36:54.928417 |
2025-05-13 22:36:54.928584 | TASK [validate-host : Collect information about the host]
2025-05-13 22:36:56.114735 | orchestrator | ok
2025-05-13 22:36:56.131840 |
2025-05-13 22:36:56.131980 | TASK [validate-host : Sanitize hostname]
2025-05-13 22:36:56.207739 | orchestrator | ok
2025-05-13 22:36:56.216563 |
2025-05-13 22:36:56.216703 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-13 22:36:56.828967 | orchestrator -> localhost | changed
2025-05-13 22:36:56.836369 |
2025-05-13 22:36:56.836488 | TASK [validate-host : Collect information about zuul worker]
2025-05-13 22:36:57.302398 | orchestrator | ok
2025-05-13 22:36:57.308341 |
2025-05-13 22:36:57.308459 | TASK [validate-host : Write out all zuul information for each host]
2025-05-13 22:36:57.882348 | orchestrator -> localhost | changed
2025-05-13 22:36:57.900691 |
2025-05-13 22:36:57.900827 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-13 22:36:58.181816 | orchestrator | ok
2025-05-13 22:36:58.192589 |
2025-05-13 22:36:58.192738 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-13 22:37:13.394266 | orchestrator | changed:
2025-05-13 22:37:13.394508 | orchestrator | .d..t...... src/
2025-05-13 22:37:13.394547 | orchestrator | .d..t...... src/github.com/
2025-05-13 22:37:13.394574 | orchestrator | .d..t...... src/github.com/osism/
2025-05-13 22:37:13.394598 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-13 22:37:13.394621 | orchestrator | RedHat.yml
2025-05-13 22:37:13.404911 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-13 22:37:13.404929 | orchestrator | RedHat.yml
2025-05-13 22:37:13.405022 | orchestrator | = 1.53.0"...
2025-05-13 22:37:26.282452 | orchestrator | 22:37:26.282 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-13 22:37:27.845738 | orchestrator | 22:37:27.845 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-13 22:37:28.830362 | orchestrator | 22:37:28.830 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-13 22:37:30.172145 | orchestrator | 22:37:30.171 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-13 22:37:31.185155 | orchestrator | 22:37:31.184 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-13 22:37:32.500830 | orchestrator | 22:37:32.500 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-05-13 22:37:33.231522 | orchestrator | 22:37:33.231 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-05-13 22:37:33.231758 | orchestrator | 22:37:33.231 STDOUT terraform: Providers are signed by their developers.
2025-05-13 22:37:33.231769 | orchestrator | 22:37:33.231 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-13 22:37:33.231774 | orchestrator | 22:37:33.231 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-13 22:37:33.232047 | orchestrator | 22:37:33.231 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-13 22:37:33.232061 | orchestrator | 22:37:33.231 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-13 22:37:33.232069 | orchestrator | 22:37:33.231 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-13 22:37:33.232073 | orchestrator | 22:37:33.232 STDOUT terraform: you run "tofu init" in the future.
2025-05-13 22:37:33.232617 | orchestrator | 22:37:33.232 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-13 22:37:33.232976 | orchestrator | 22:37:33.232 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-13 22:37:33.232988 | orchestrator | 22:37:33.232 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-13 22:37:33.232993 | orchestrator | 22:37:33.232 STDOUT terraform: should now work.
2025-05-13 22:37:33.232997 | orchestrator | 22:37:33.232 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-13 22:37:33.233001 | orchestrator | 22:37:33.232 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-13 22:37:33.233007 | orchestrator | 22:37:33.232 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-13 22:37:33.425867 | orchestrator | 22:37:33.425 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-13 22:37:33.631203 | orchestrator | 22:37:33.630 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-13 22:37:33.631355 | orchestrator | 22:37:33.631 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-13 22:37:33.631407 | orchestrator | 22:37:33.631 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-13 22:37:33.631454 | orchestrator | 22:37:33.631 STDOUT terraform: for this configuration.
2025-05-13 22:37:33.886604 | orchestrator | 22:37:33.886 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
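The init output above records the provider selections for this working directory. A minimal sketch of the required_providers block that would produce these "Finding ... versions matching" lines; the openstack constraint is reconstructed from the truncated `= 1.53.0"...` fragment earlier in the log, so treat the exact bounds as an assumption:

terraform {
  required_providers {
    # Constraint inferred from the truncated '= 1.53.0"...' fragment above.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # No version constraint is visible in the log; init selected v3.2.4.
    null = {
      source = "hashicorp/null"
    }
  }
}

Committing the generated .terraform.lock.hcl, as the init message suggests, pins the exact versions selected here (null v3.2.4, openstack v3.0.0, local v2.5.2) for later runs.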
2025-05-13 22:37:34.001217 | orchestrator | 22:37:34.001 STDOUT terraform: ci.auto.tfvars
2025-05-13 22:37:34.006469 | orchestrator | 22:37:34.006 STDOUT terraform: default_custom.tf
2025-05-13 22:37:34.234337 | orchestrator | 22:37:34.234 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-13 22:37:35.215191 | orchestrator | 22:37:35.214 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-13 22:37:35.736282 | orchestrator | 22:37:35.735 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-13 22:37:35.938199 | orchestrator | 22:37:35.934 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-13 22:37:35.938265 | orchestrator | 22:37:35.934 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-13 22:37:35.938273 | orchestrator | 22:37:35.934 STDOUT terraform:  + create
2025-05-13 22:37:35.938280 | orchestrator | 22:37:35.934 STDOUT terraform:  <= read (data resources)
2025-05-13 22:37:35.938286 | orchestrator | 22:37:35.934 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-13 22:37:35.938294 | orchestrator | 22:37:35.934 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-05-13 22:37:35.938298 | orchestrator | 22:37:35.934 STDOUT terraform:  # (config refers to values not yet known)
2025-05-13 22:37:35.938303 | orchestrator | 22:37:35.935 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-13 22:37:35.938308 | orchestrator | 22:37:35.935 STDOUT terraform:  + checksum = (known after apply)
2025-05-13 22:37:35.938312 | orchestrator | 22:37:35.935 STDOUT terraform:  + created_at = (known after apply)
2025-05-13 22:37:35.938317 | orchestrator | 22:37:35.935 STDOUT terraform:  + file = (known after apply)
2025-05-13 22:37:35.938321 | orchestrator | 22:37:35.935 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.938326 | orchestrator | 22:37:35.935 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.938331 | orchestrator | 22:37:35.935 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-13 22:37:35.938335 | orchestrator | 22:37:35.935 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-13 22:37:35.938340 | orchestrator | 22:37:35.935 STDOUT terraform:  + most_recent = true
2025-05-13 22:37:35.938360 | orchestrator | 22:37:35.935 STDOUT terraform:  + name = (known after apply)
2025-05-13 22:37:35.938365 | orchestrator | 22:37:35.935 STDOUT terraform:  + protected = (known after apply)
2025-05-13 22:37:35.938369 | orchestrator | 22:37:35.935 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.938374 | orchestrator | 22:37:35.935 STDOUT terraform:  + schema = (known after apply)
2025-05-13 22:37:35.938378 | orchestrator | 22:37:35.935 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-13 22:37:35.938383 | orchestrator | 22:37:35.935 STDOUT terraform:  + tags = (known after apply)
2025-05-13 22:37:35.938388 | orchestrator | 22:37:35.936 STDOUT terraform:  + updated_at = (known after apply)
2025-05-13 22:37:35.938392 | orchestrator | 22:37:35.936 STDOUT terraform:  }
2025-05-13 22:37:35.938397 | orchestrator | 22:37:35.936 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-05-13 22:37:35.938402 | orchestrator | 22:37:35.936 STDOUT terraform:  # (config refers to values not yet known)
2025-05-13 22:37:35.938410 | orchestrator | 22:37:35.936 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-13 22:37:35.938414 | orchestrator | 22:37:35.936 STDOUT terraform:  + checksum = (known after apply)
2025-05-13 22:37:35.938419 | orchestrator | 22:37:35.936 STDOUT terraform:  + created_at = (known after apply)
2025-05-13 22:37:35.938424 | orchestrator | 22:37:35.936 STDOUT terraform:  + file = (known after apply)
2025-05-13 22:37:35.938428 | orchestrator | 22:37:35.936 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.938433 | orchestrator | 22:37:35.936 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.938437 | orchestrator | 22:37:35.936 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-13 22:37:35.938442 | orchestrator | 22:37:35.936 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-13 22:37:35.938446 | orchestrator | 22:37:35.936 STDOUT terraform:  + most_recent = true
2025-05-13 22:37:35.938451 | orchestrator | 22:37:35.937 STDOUT terraform:  + name = (known after apply)
2025-05-13 22:37:35.938456 | orchestrator | 22:37:35.937 STDOUT terraform:  + protected = (known after apply)
2025-05-13 22:37:35.938460 | orchestrator | 22:37:35.937 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.938476 | orchestrator | 22:37:35.937 STDOUT terraform:  + schema = (known after apply)
2025-05-13 22:37:35.938488 | orchestrator | 22:37:35.937 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-13 22:37:35.938493 | orchestrator | 22:37:35.937 STDOUT terraform:  + tags = (known after apply)
2025-05-13 22:37:35.938498 | orchestrator | 22:37:35.937 STDOUT terraform:  + updated_at = (known after apply)
2025-05-13 22:37:35.938502 | orchestrator | 22:37:35.937 STDOUT terraform:  }
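Both image lookups above are deferred to apply time because their name argument is not yet known when the plan is built. A minimal sketch, assuming the names come from input variables (var.image and var.image_node are hypothetical names; the plan only shows name = (known after apply) and most_recent = true):

# Hypothetical variables; only most_recent = true is visible in the plan.
data "openstack_images_image_v2" "image" {
  name        = var.image
  most_recent = true
}

data "openstack_images_image_v2" "image_node" {
  name        = var.image_node
  most_recent = true
}

With most_recent = true, the newest matching image wins if several share the name.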
2025-05-13 22:37:35.938507 | orchestrator | 22:37:35.937 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-05-13 22:37:35.938511 | orchestrator | 22:37:35.937 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-05-13 22:37:35.938516 | orchestrator | 22:37:35.937 STDOUT terraform:  + content = (known after apply)
2025-05-13 22:37:35.938520 | orchestrator | 22:37:35.937 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-13 22:37:35.938530 | orchestrator | 22:37:35.937 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-13 22:37:35.938535 | orchestrator | 22:37:35.937 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-13 22:37:35.938539 | orchestrator | 22:37:35.938 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-13 22:37:35.938544 | orchestrator | 22:37:35.938 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-13 22:37:35.938549 | orchestrator | 22:37:35.938 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-13 22:37:35.938556 | orchestrator | 22:37:35.938 STDOUT terraform:  + directory_permission = "0777"
2025-05-13 22:37:35.938561 | orchestrator | 22:37:35.938 STDOUT terraform:  + file_permission = "0644"
2025-05-13 22:37:35.938662 | orchestrator | 22:37:35.938 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-05-13 22:37:35.938806 | orchestrator | 22:37:35.938 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.938844 | orchestrator | 22:37:35.938 STDOUT terraform:  }
2025-05-13 22:37:35.939060 | orchestrator | 22:37:35.938 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-05-13 22:37:35.939116 | orchestrator | 22:37:35.939 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-05-13 22:37:35.939213 | orchestrator | 22:37:35.939 STDOUT terraform:  + content = (known after apply)
2025-05-13 22:37:35.939293 | orchestrator | 22:37:35.939 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-13 22:37:35.939379 | orchestrator | 22:37:35.939 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-13 22:37:35.939467 | orchestrator | 22:37:35.939 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-13 22:37:35.939555 | orchestrator | 22:37:35.939 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-13 22:37:35.939641 | orchestrator | 22:37:35.939 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-13 22:37:35.939727 | orchestrator | 22:37:35.939 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-13 22:37:35.939786 | orchestrator | 22:37:35.939 STDOUT terraform:  + directory_permission = "0777"
2025-05-13 22:37:35.939847 | orchestrator | 22:37:35.939 STDOUT terraform:  + file_permission = "0644"
2025-05-13 22:37:35.939928 | orchestrator | 22:37:35.939 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-05-13 22:37:35.940112 | orchestrator | 22:37:35.939 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.940144 | orchestrator | 22:37:35.940 STDOUT terraform:  }
2025-05-13 22:37:35.940200 | orchestrator | 22:37:35.940 STDOUT terraform:  # local_file.inventory will be created
2025-05-13 22:37:35.940261 | orchestrator | 22:37:35.940 STDOUT terraform:  + resource "local_file" "inventory" {
2025-05-13 22:37:35.940351 | orchestrator | 22:37:35.940 STDOUT terraform:  + content = (known after apply)
2025-05-13 22:37:35.940438 | orchestrator | 22:37:35.940 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-13 22:37:35.940523 | orchestrator | 22:37:35.940 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-13 22:37:35.940611 | orchestrator | 22:37:35.940 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-13 22:37:35.940701 | orchestrator | 22:37:35.940 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-13 22:37:35.940784 | orchestrator | 22:37:35.940 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-13 22:37:35.940871 | orchestrator | 22:37:35.940 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-13 22:37:35.940929 | orchestrator | 22:37:35.940 STDOUT terraform:  + directory_permission = "0777"
2025-05-13 22:37:35.941025 | orchestrator | 22:37:35.940 STDOUT terraform:  + file_permission = "0644"
2025-05-13 22:37:35.941088 | orchestrator | 22:37:35.941 STDOUT terraform:  + filename = "inventory.ci"
2025-05-13 22:37:35.941179 | orchestrator | 22:37:35.941 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.941212 | orchestrator | 22:37:35.941 STDOUT terraform:  }
2025-05-13 22:37:35.941283 | orchestrator | 22:37:35.941 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-05-13 22:37:35.941356 | orchestrator | 22:37:35.941 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-05-13 22:37:35.941434 | orchestrator | 22:37:35.941 STDOUT terraform:  + content = (sensitive value)
2025-05-13 22:37:35.941547 | orchestrator | 22:37:35.941 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-13 22:37:35.941636 | orchestrator | 22:37:35.941 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-13 22:37:35.941724 | orchestrator | 22:37:35.941 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-13 22:37:35.941813 | orchestrator | 22:37:35.941 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-13 22:37:35.941900 | orchestrator | 22:37:35.941 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-13 22:37:35.942080 | orchestrator | 22:37:35.941 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-13 22:37:35.942142 | orchestrator | 22:37:35.942 STDOUT terraform:  + directory_permission = "0700"
2025-05-13 22:37:35.942201 | orchestrator | 22:37:35.942 STDOUT terraform:  + file_permission = "0600"
2025-05-13 22:37:35.942268 | orchestrator | 22:37:35.942 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-05-13 22:37:35.942343 | orchestrator | 22:37:35.942 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.942365 | orchestrator | 22:37:35.942 STDOUT terraform:  }
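The four file resources above write the CI connection artifacts (manager address, inventory, key pair) next to the configuration. A sketch under stated assumptions; the content expressions are not visible in the plan (shown only as (known after apply) or (sensitive value)), so both right-hand sides below are hypothetical:

resource "local_file" "MANAGER_ADDRESS" {
  filename        = ".MANAGER_ADDRESS.ci"
  file_permission = "0644"
  # Hypothetical source: the manager's address produced elsewhere in the
  # configuration, e.g. a floating IP or fixed IP attribute.
  content = var.manager_address
}

resource "local_sensitive_file" "id_rsa" {
  filename        = ".id_rsa.ci"
  file_permission = "0600"
  # (sensitive value) in the plan; a generated or injected private key is
  # one plausible source.
  content = var.private_key
}

local_sensitive_file differs from local_file in that it defaults to the restrictive 0700/0600 permissions shown in the plan and keeps the content out of plan output.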
2025-05-13 22:37:35.942423 | orchestrator | 22:37:35.942 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-05-13 22:37:35.942485 | orchestrator | 22:37:35.942 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-05-13 22:37:35.942526 | orchestrator | 22:37:35.942 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.942549 | orchestrator | 22:37:35.942 STDOUT terraform:  }
2025-05-13 22:37:35.942648 | orchestrator | 22:37:35.942 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-13 22:37:35.942741 | orchestrator | 22:37:35.942 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-13 22:37:35.942802 | orchestrator | 22:37:35.942 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.942839 | orchestrator | 22:37:35.942 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.942900 | orchestrator | 22:37:35.942 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.942991 | orchestrator | 22:37:35.942 STDOUT terraform:  + image_id = (known after apply)
2025-05-13 22:37:35.943059 | orchestrator | 22:37:35.942 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.943140 | orchestrator | 22:37:35.943 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-05-13 22:37:35.943206 | orchestrator | 22:37:35.943 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.943243 | orchestrator | 22:37:35.943 STDOUT terraform:  + size = 80
2025-05-13 22:37:35.943285 | orchestrator | 22:37:35.943 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.943308 | orchestrator | 22:37:35.943 STDOUT terraform:  }
2025-05-13 22:37:35.943401 | orchestrator | 22:37:35.943 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-13 22:37:35.943492 | orchestrator | 22:37:35.943 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 22:37:35.943554 | orchestrator | 22:37:35.943 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.943595 | orchestrator | 22:37:35.943 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.943657 | orchestrator | 22:37:35.943 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.943719 | orchestrator | 22:37:35.943 STDOUT terraform:  + image_id = (known after apply)
2025-05-13 22:37:35.943781 | orchestrator | 22:37:35.943 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.943858 | orchestrator | 22:37:35.943 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-05-13 22:37:35.943920 | orchestrator | 22:37:35.943 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.943975 | orchestrator | 22:37:35.943 STDOUT terraform:  + size = 80
2025-05-13 22:37:35.944022 | orchestrator | 22:37:35.943 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.944045 | orchestrator | 22:37:35.944 STDOUT terraform:  }
2025-05-13 22:37:35.944135 | orchestrator | 22:37:35.944 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-13 22:37:35.944229 | orchestrator | 22:37:35.944 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 22:37:35.944291 | orchestrator | 22:37:35.944 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.944332 | orchestrator | 22:37:35.944 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.944398 | orchestrator | 22:37:35.944 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.944455 | orchestrator | 22:37:35.944 STDOUT terraform:  + image_id = (known after apply)
2025-05-13 22:37:35.944517 | orchestrator | 22:37:35.944 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.944594 | orchestrator | 22:37:35.944 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-05-13 22:37:35.944655 | orchestrator | 22:37:35.944 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.944698 | orchestrator | 22:37:35.944 STDOUT terraform:  + size = 80
2025-05-13 22:37:35.944740 | orchestrator | 22:37:35.944 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.944763 | orchestrator | 22:37:35.944 STDOUT terraform:  }
2025-05-13 22:37:35.944853 | orchestrator | 22:37:35.944 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-13 22:37:35.944943 | orchestrator | 22:37:35.944 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 22:37:35.945048 | orchestrator | 22:37:35.944 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.945092 | orchestrator | 22:37:35.945 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.945157 | orchestrator | 22:37:35.945 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.945219 | orchestrator | 22:37:35.945 STDOUT terraform:  + image_id = (known after apply)
2025-05-13 22:37:35.945281 | orchestrator | 22:37:35.945 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.945348 | orchestrator | 22:37:35.945 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-05-13 22:37:35.945400 | orchestrator | 22:37:35.945 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.945434 | orchestrator | 22:37:35.945 STDOUT terraform:  + size = 80
2025-05-13 22:37:35.945469 | orchestrator | 22:37:35.945 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.945478 | orchestrator | 22:37:35.945 STDOUT terraform:  }
2025-05-13 22:37:35.945566 | orchestrator | 22:37:35.945 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-13 22:37:35.945646 | orchestrator | 22:37:35.945 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 22:37:35.945699 | orchestrator | 22:37:35.945 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.945734 | orchestrator | 22:37:35.945 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.945786 | orchestrator | 22:37:35.945 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.945840 | orchestrator | 22:37:35.945 STDOUT terraform:  + image_id = (known after apply)
2025-05-13 22:37:35.945892 | orchestrator | 22:37:35.945 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.945970 | orchestrator | 22:37:35.945 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-05-13 22:37:35.946029 | orchestrator | 22:37:35.945 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.946087 | orchestrator | 22:37:35.946 STDOUT terraform:  + size = 80
2025-05-13 22:37:35.946172 | orchestrator | 22:37:35.946 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.946181 | orchestrator | 22:37:35.946 STDOUT terraform:  }
2025-05-13 22:37:35.946219 | orchestrator | 22:37:35.946 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-13 22:37:35.946298 | orchestrator | 22:37:35.946 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 22:37:35.946350 | orchestrator | 22:37:35.946 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.946386 | orchestrator | 22:37:35.946 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.946439 | orchestrator | 22:37:35.946 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.946492 | orchestrator | 22:37:35.946 STDOUT terraform:  + image_id = (known after apply)
2025-05-13 22:37:35.946545 | orchestrator | 22:37:35.946 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.946610 | orchestrator | 22:37:35.946 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-05-13 22:37:35.946664 | orchestrator | 22:37:35.946 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.946699 | orchestrator | 22:37:35.946 STDOUT terraform:  + size = 80
2025-05-13 22:37:35.946734 | orchestrator | 22:37:35.946 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.946763 | orchestrator | 22:37:35.946 STDOUT terraform:  }
2025-05-13 22:37:35.946837 | orchestrator | 22:37:35.946 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-13 22:37:35.946914 | orchestrator | 22:37:35.946 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 22:37:35.947135 | orchestrator | 22:37:35.946 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.947244 | orchestrator | 22:37:35.946 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.947262 | orchestrator | 22:37:35.947 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.947286 | orchestrator | 22:37:35.947 STDOUT terraform:  + image_id = (known after apply)
2025-05-13 22:37:35.947298 | orchestrator | 22:37:35.947 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.947308 | orchestrator | 22:37:35.947 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-05-13 22:37:35.947319 | orchestrator | 22:37:35.947 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.947334 | orchestrator | 22:37:35.947 STDOUT terraform:  + size = 80
2025-05-13 22:37:35.947345 | orchestrator | 22:37:35.947 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.947360 | orchestrator | 22:37:35.947 STDOUT terraform:  }
2025-05-13 22:37:35.947445 | orchestrator | 22:37:35.947 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-13 22:37:35.947516 | orchestrator | 22:37:35.947 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 22:37:35.947568 | orchestrator | 22:37:35.947 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.947585 | orchestrator | 22:37:35.947 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.947651 | orchestrator | 22:37:35.947 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.947704 | orchestrator | 22:37:35.947 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.947770 | orchestrator | 22:37:35.947 STDOUT terraform:  + name = "testbed-volume-0-node-3"
2025-05-13 22:37:35.951106 | orchestrator | 22:37:35.951 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.951183 | orchestrator | 22:37:35.951 STDOUT terraform:  + size = 20
2025-05-13 22:37:35.951197 | orchestrator | 22:37:35.951 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.951208 | orchestrator | 22:37:35.951 STDOUT terraform:  }
2025-05-13 22:37:35.951241 | orchestrator | 22:37:35.951 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-05-13 22:37:35.951308 | orchestrator | 22:37:35.951 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 22:37:35.951368 | orchestrator | 22:37:35.951 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.951385 | orchestrator | 22:37:35.951 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.951424 | orchestrator | 22:37:35.951 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.951474 | orchestrator | 22:37:35.951 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.951549 | orchestrator | 22:37:35.951 STDOUT terraform:  + name = "testbed-volume-1-node-4"
2025-05-13 22:37:35.951566 | orchestrator | 22:37:35.951 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.951605 | orchestrator | 22:37:35.951 STDOUT terraform:  + size = 20
2025-05-13 22:37:35.951632 | orchestrator | 22:37:35.951 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.951647 | orchestrator | 22:37:35.951 STDOUT terraform:  }
2025-05-13 22:37:35.951721 | orchestrator | 22:37:35.951 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-05-13 22:37:35.951770 | orchestrator | 22:37:35.951 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 22:37:35.951813 | orchestrator | 22:37:35.951 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.951843 | orchestrator | 22:37:35.951 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.951887 | orchestrator | 22:37:35.951 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.951939 | orchestrator | 22:37:35.951 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.952007 | orchestrator | 22:37:35.951 STDOUT terraform:  + name = "testbed-volume-2-node-5"
2025-05-13 22:37:35.952041 | orchestrator | 22:37:35.951 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.952109 | orchestrator | 22:37:35.952 STDOUT terraform:  + size = 20
2025-05-13 22:37:35.952125 | orchestrator | 22:37:35.952 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.952140 | orchestrator | 22:37:35.952 STDOUT terraform:  }
2025-05-13 22:37:35.952218 | orchestrator | 22:37:35.952 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-05-13 22:37:35.952277 | orchestrator | 22:37:35.952 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 22:37:35.952317 | orchestrator | 22:37:35.952 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.952333 | orchestrator | 22:37:35.952 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.952384 | orchestrator | 22:37:35.952 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.952415 | orchestrator | 22:37:35.952 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.952476 | orchestrator | 22:37:35.952 STDOUT terraform:  + name = "testbed-volume-3-node-3"
2025-05-13 22:37:35.952515 | orchestrator | 22:37:35.952 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.952531 | orchestrator | 22:37:35.952 STDOUT terraform:  + size = 20
2025-05-13 22:37:35.952545 | orchestrator | 22:37:35.952 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.952559 | orchestrator | 22:37:35.952 STDOUT terraform:  }
2025-05-13 22:37:35.952642 | orchestrator | 22:37:35.952 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-05-13 22:37:35.952702 | orchestrator | 22:37:35.952 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 22:37:35.952741 | orchestrator | 22:37:35.952 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.952756 | orchestrator | 22:37:35.952 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.952810 | orchestrator | 22:37:35.952 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.952849 | orchestrator | 22:37:35.952 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.952913 | orchestrator | 22:37:35.952 STDOUT terraform:  + name = "testbed-volume-4-node-4"
2025-05-13 22:37:35.952985 | orchestrator | 22:37:35.952 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.953011 | orchestrator | 22:37:35.952 STDOUT terraform:  + size = 20
2025-05-13 22:37:35.953026 | orchestrator | 22:37:35.952 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.953040 | orchestrator | 22:37:35.953 STDOUT terraform:  }
2025-05-13 22:37:35.953106 | orchestrator | 22:37:35.953 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-05-13 22:37:35.953166 | orchestrator | 22:37:35.953 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 22:37:35.953206 | orchestrator | 22:37:35.953 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.953221 | orchestrator | 22:37:35.953 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.953276 | orchestrator | 22:37:35.953 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.953315 | orchestrator | 22:37:35.953 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.953368 | orchestrator | 22:37:35.953 STDOUT terraform:  + name = "testbed-volume-5-node-5"
2025-05-13 22:37:35.953418 | orchestrator | 22:37:35.953 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.953457 | orchestrator | 22:37:35.953 STDOUT terraform:  + size = 20
2025-05-13 22:37:35.953495 | orchestrator | 22:37:35.953 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.953511 | orchestrator | 22:37:35.953 STDOUT terraform:  }
2025-05-13 22:37:35.953612 | orchestrator | 22:37:35.953 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-05-13 22:37:35.953679 | orchestrator | 22:37:35.953 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 22:37:35.953705 | orchestrator | 22:37:35.953 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.953744 | orchestrator | 22:37:35.953 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.953783 | orchestrator | 22:37:35.953 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.953822 | orchestrator | 22:37:35.953 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.953880 | orchestrator | 22:37:35.953 STDOUT terraform:  + name = "testbed-volume-6-node-3"
2025-05-13 22:37:35.953918 | orchestrator | 22:37:35.953 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.953934 | orchestrator | 22:37:35.953 STDOUT terraform:  + size = 20
2025-05-13 22:37:35.954044 | orchestrator | 22:37:35.953 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.954062 | orchestrator | 22:37:35.953 STDOUT terraform:  }
2025-05-13 22:37:35.954113 | orchestrator | 22:37:35.954 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-05-13 22:37:35.954260 | orchestrator | 22:37:35.954 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 22:37:35.954293 | orchestrator | 22:37:35.954 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.954304 | orchestrator | 22:37:35.954 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.954321 | orchestrator | 22:37:35.954 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.954331 | orchestrator | 22:37:35.954 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.954343 | orchestrator | 22:37:35.954 STDOUT terraform:  + name = "testbed-volume-7-node-4"
2025-05-13 22:37:35.954376 | orchestrator | 22:37:35.954 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.954385 | orchestrator | 22:37:35.954 STDOUT terraform:  + size = 20
2025-05-13 22:37:35.954420 | orchestrator | 22:37:35.954 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.954429 | orchestrator | 22:37:35.954 STDOUT terraform:  }
2025-05-13 22:37:35.954487 | orchestrator | 22:37:35.954 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-05-13 22:37:35.954548 | orchestrator | 22:37:35.954 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 22:37:35.954579 | orchestrator | 22:37:35.954 STDOUT terraform:  + attachment = (known after apply)
2025-05-13 22:37:35.954602 | orchestrator | 22:37:35.954 STDOUT terraform:  + availability_zone = "nova"
2025-05-13 22:37:35.954641 | orchestrator | 22:37:35.954 STDOUT terraform:  + id = (known after apply)
2025-05-13 22:37:35.954680 | orchestrator | 22:37:35.954 STDOUT terraform:  + metadata = (known after apply)
2025-05-13 22:37:35.954729 | orchestrator | 22:37:35.954 STDOUT terraform:  + name = "testbed-volume-8-node-5"
2025-05-13 22:37:35.954767 | orchestrator | 22:37:35.954 STDOUT terraform:  + region = (known after apply)
2025-05-13 22:37:35.954789 | orchestrator | 22:37:35.954 STDOUT terraform:  + size = 20
2025-05-13 22:37:35.954810 | orchestrator | 22:37:35.954 STDOUT terraform:  + volume_type = "ssd"
2025-05-13 22:37:35.954826 | orchestrator | 22:37:35.954 STDOUT terraform:  }
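All fifteen volume blocks above follow two regular naming patterns, which points to count-based resources. A hedged reconstruction; the counts and the node-number arithmetic are inferred from the names in the plan and are not visible in the log itself:

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count             = 6
  name              = "testbed-volume-${count.index}-node-base"
  availability_zone = "nova"
  size              = 80
  volume_type       = "ssd"
  # The plan shows image_id as (known after apply); the deferred image_node
  # lookup above is a plausible source (assumption).
  image_id = data.openstack_images_image_v2.image_node.id
}

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9
  # Plan names cycle over nodes 3-5: testbed-volume-0-node-3,
  # testbed-volume-1-node-4, ..., testbed-volume-8-node-5.
  name              = "testbed-volume-${count.index}-node-${3 + count.index % 3}"
  availability_zone = "nova"
  size              = 20
  volume_type       = "ssd"
}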
terraform:  } 2025-05-13 22:37:35.954877 | orchestrator | 22:37:35.954 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-13 22:37:35.954932 | orchestrator | 22:37:35.954 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-13 22:37:35.954993 | orchestrator | 22:37:35.954 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-13 22:37:35.955039 | orchestrator | 22:37:35.954 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-13 22:37:35.955088 | orchestrator | 22:37:35.955 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-13 22:37:35.955132 | orchestrator | 22:37:35.955 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 22:37:35.955147 | orchestrator | 22:37:35.955 STDOUT terraform:  + availability_zone = "nova" 2025-05-13 22:37:35.955178 | orchestrator | 22:37:35.955 STDOUT terraform:  + config_drive = true 2025-05-13 22:37:35.955223 | orchestrator | 22:37:35.955 STDOUT terraform:  + created = (known after apply) 2025-05-13 22:37:35.955264 | orchestrator | 22:37:35.955 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-13 22:37:35.955306 | orchestrator | 22:37:35.955 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-13 22:37:35.955344 | orchestrator | 22:37:35.955 STDOUT terraform:  + force_delete = false 2025-05-13 22:37:35.955378 | orchestrator | 22:37:35.955 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.955422 | orchestrator | 22:37:35.955 STDOUT terraform:  + image_id = (known after apply) 2025-05-13 22:37:35.955466 | orchestrator | 22:37:35.955 STDOUT terraform:  + image_name = (known after apply) 2025-05-13 22:37:35.955498 | orchestrator | 22:37:35.955 STDOUT terraform:  + key_pair = "testbed" 2025-05-13 22:37:35.955537 | orchestrator | 22:37:35.955 STDOUT terraform:  + name = "testbed-manager" 2025-05-13 22:37:35.955569 | orchestrator | 22:37:35.955 STDOUT terraform:  + power_state = "active" 2025-05-13 22:37:35.955612 | orchestrator | 22:37:35.955 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.955655 | orchestrator | 22:37:35.955 STDOUT terraform:  + security_groups = (known after apply) 2025-05-13 22:37:35.955680 | orchestrator | 22:37:35.955 STDOUT terraform:  + stop_before_destroy = false 2025-05-13 22:37:35.955731 | orchestrator | 22:37:35.955 STDOUT terraform:  + updated = (known after apply) 2025-05-13 22:37:35.955770 | orchestrator | 22:37:35.955 STDOUT terraform:  + user_data = (known after apply) 2025-05-13 22:37:35.955786 | orchestrator | 22:37:35.955 STDOUT terraform:  + block_device { 2025-05-13 22:37:35.955811 | orchestrator | 22:37:35.955 STDOUT terraform:  + boot_index = 0 2025-05-13 22:37:35.955844 | orchestrator | 22:37:35.955 STDOUT terraform:  + delete_on_termination = false 2025-05-13 22:37:35.955880 | orchestrator | 22:37:35.955 STDOUT terraform:  + destination_type = "volume" 2025-05-13 22:37:35.955918 | orchestrator | 22:37:35.955 STDOUT terraform:  + multiattach = false 2025-05-13 22:37:35.955971 | orchestrator | 22:37:35.955 STDOUT terraform:  + source_type = "volume" 2025-05-13 22:37:35.956018 | orchestrator | 22:37:35.955 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 22:37:35.956028 | orchestrator | 22:37:35.956 STDOUT terraform:  } 2025-05-13 22:37:35.956042 | orchestrator | 22:37:35.956 STDOUT terraform:  + network { 2025-05-13 22:37:35.956066 | orchestrator | 22:37:35.956 STDOUT terraform:  + access_network = false 2025-05-13 22:37:35.956105 | orchestrator | 
22:37:35.956 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-13 22:37:35.956146 | orchestrator | 22:37:35.956 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-13 22:37:35.956185 | orchestrator | 22:37:35.956 STDOUT terraform:  + mac = (known after apply) 2025-05-13 22:37:35.956223 | orchestrator | 22:37:35.956 STDOUT terraform:  + name = (known after apply) 2025-05-13 22:37:35.956275 | orchestrator | 22:37:35.956 STDOUT terraform:  + port = (known after apply) 2025-05-13 22:37:35.956314 | orchestrator | 22:37:35.956 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 22:37:35.956324 | orchestrator | 22:37:35.956 STDOUT terraform:  } 2025-05-13 22:37:35.956332 | orchestrator | 22:37:35.956 STDOUT terraform:  } 2025-05-13 22:37:35.956395 | orchestrator | 22:37:35.956 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-13 22:37:35.956448 | orchestrator | 22:37:35.956 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-13 22:37:35.956493 | orchestrator | 22:37:35.956 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-13 22:37:35.956536 | orchestrator | 22:37:35.956 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-13 22:37:35.956581 | orchestrator | 22:37:35.956 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-13 22:37:35.956625 | orchestrator | 22:37:35.956 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 22:37:35.956656 | orchestrator | 22:37:35.956 STDOUT terraform:  + availability_zone = "nova" 2025-05-13 22:37:35.956681 | orchestrator | 22:37:35.956 STDOUT terraform:  + config_drive = true 2025-05-13 22:37:35.956723 | orchestrator | 22:37:35.956 STDOUT terraform:  + created = (known after apply) 2025-05-13 22:37:35.956768 | orchestrator | 22:37:35.956 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-13 22:37:35.956813 | orchestrator | 22:37:35.956 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-13 22:37:35.956844 | orchestrator | 22:37:35.956 STDOUT terraform:  + force_delete = false 2025-05-13 22:37:35.956889 | orchestrator | 22:37:35.956 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.956933 | orchestrator | 22:37:35.956 STDOUT terraform:  + image_id = (known after apply) 2025-05-13 22:37:35.957084 | orchestrator | 22:37:35.956 STDOUT terraform:  + image_name = (known after apply) 2025-05-13 22:37:35.957124 | orchestrator | 22:37:35.956 STDOUT terraform:  + key_pair = "testbed" 2025-05-13 22:37:35.957138 | orchestrator | 22:37:35.957 STDOUT terraform:  + name = "testbed-node-0" 2025-05-13 22:37:35.957157 | orchestrator | 22:37:35.957 STDOUT terraform:  + power_state = "active" 2025-05-13 22:37:35.957185 | orchestrator | 22:37:35.957 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.957195 | orchestrator | 22:37:35.957 STDOUT terraform:  + security_groups = (known after apply) 2025-05-13 22:37:35.957208 | orchestrator | 22:37:35.957 STDOUT terraform:  + stop_before_destroy = false 2025-05-13 22:37:35.957222 | orchestrator | 22:37:35.957 STDOUT terraform:  + updated = (known after apply) 2025-05-13 22:37:35.957297 | orchestrator | 22:37:35.957 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-13 22:37:35.957313 | orchestrator | 22:37:35.957 STDOUT terraform:  + block_device { 2025-05-13 22:37:35.957326 | orchestrator | 22:37:35.957 STDOUT terraform:  + boot_index = 0 2025-05-13 22:37:35.957372 | orchestrator | 22:37:35.957 STDOUT 
terraform:  + delete_on_termination = false 2025-05-13 22:37:35.957408 | orchestrator | 22:37:35.957 STDOUT terraform:  + destination_type = "volume" 2025-05-13 22:37:35.957423 | orchestrator | 22:37:35.957 STDOUT terraform:  + multiattach = false 2025-05-13 22:37:35.957475 | orchestrator | 22:37:35.957 STDOUT terraform:  + source_type = "volume" 2025-05-13 22:37:35.957525 | orchestrator | 22:37:35.957 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 22:37:35.957539 | orchestrator | 22:37:35.957 STDOUT terraform:  } 2025-05-13 22:37:35.957553 | orchestrator | 22:37:35.957 STDOUT terraform:  + network { 2025-05-13 22:37:35.957567 | orchestrator | 22:37:35.957 STDOUT terraform:  + access_network = false 2025-05-13 22:37:35.957606 | orchestrator | 22:37:35.957 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-13 22:37:35.957645 | orchestrator | 22:37:35.957 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-13 22:37:35.957661 | orchestrator | 22:37:35.957 STDOUT terraform:  + mac = (known after apply) 2025-05-13 22:37:35.957726 | orchestrator | 22:37:35.957 STDOUT terraform:  + name = (known after apply) 2025-05-13 22:37:35.957742 | orchestrator | 22:37:35.957 STDOUT terraform:  + port = (known after apply) 2025-05-13 22:37:35.957782 | orchestrator | 22:37:35.957 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 22:37:35.957798 | orchestrator | 22:37:35.957 STDOUT terraform:  } 2025-05-13 22:37:35.957810 | orchestrator | 22:37:35.957 STDOUT terraform:  } 2025-05-13 22:37:35.957871 | orchestrator | 22:37:35.957 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-13 22:37:35.957926 | orchestrator | 22:37:35.957 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-13 22:37:35.957996 | orchestrator | 22:37:35.957 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-13 22:37:35.958055 | orchestrator | 22:37:35.957 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-13 22:37:35.958107 | orchestrator | 22:37:35.958 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-13 22:37:35.958147 | orchestrator | 22:37:35.958 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 22:37:35.958163 | orchestrator | 22:37:35.958 STDOUT terraform:  + availability_zone = "nova" 2025-05-13 22:37:35.958188 | orchestrator | 22:37:35.958 STDOUT terraform:  + config_drive = true 2025-05-13 22:37:35.958243 | orchestrator | 22:37:35.958 STDOUT terraform:  + created = (known after apply) 2025-05-13 22:37:35.958283 | orchestrator | 22:37:35.958 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-13 22:37:35.958299 | orchestrator | 22:37:35.958 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-13 22:37:35.958338 | orchestrator | 22:37:35.958 STDOUT terraform:  + force_delete = false 2025-05-13 22:37:35.958388 | orchestrator | 22:37:35.958 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.958429 | orchestrator | 22:37:35.958 STDOUT terraform:  + image_id = (known after apply) 2025-05-13 22:37:35.958468 | orchestrator | 22:37:35.958 STDOUT terraform:  + image_name = (known after apply) 2025-05-13 22:37:35.958483 | orchestrator | 22:37:35.958 STDOUT terraform:  + key_pair = "testbed" 2025-05-13 22:37:35.958534 | orchestrator | 22:37:35.958 STDOUT terraform:  + name = "testbed-node-1" 2025-05-13 22:37:35.958550 | orchestrator | 22:37:35.958 STDOUT terraform:  + power_state = "active" 2025-05-13 22:37:35.958603 | orchestrator | 22:37:35.958 
STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.958643 | orchestrator | 22:37:35.958 STDOUT terraform:  + security_groups = (known after apply) 2025-05-13 22:37:35.958659 | orchestrator | 22:37:35.958 STDOUT terraform:  + stop_before_destroy = false 2025-05-13 22:37:35.958724 | orchestrator | 22:37:35.958 STDOUT terraform:  + updated = (known after apply) 2025-05-13 22:37:35.958818 | orchestrator | 22:37:35.958 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-13 22:37:35.958834 | orchestrator | 22:37:35.958 STDOUT terraform:  + block_device { 2025-05-13 22:37:35.958849 | orchestrator | 22:37:35.958 STDOUT terraform:  + boot_index = 0 2025-05-13 22:37:35.958887 | orchestrator | 22:37:35.958 STDOUT terraform:  + delete_on_termination = false 2025-05-13 22:37:35.958912 | orchestrator | 22:37:35.958 STDOUT terraform:  + destination_type = "volume" 2025-05-13 22:37:35.958951 | orchestrator | 22:37:35.958 STDOUT terraform:  + multiattach = false 2025-05-13 22:37:35.959002 | orchestrator | 22:37:35.958 STDOUT terraform:  + source_type = "volume" 2025-05-13 22:37:35.959054 | orchestrator | 22:37:35.958 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 22:37:35.959068 | orchestrator | 22:37:35.959 STDOUT terraform:  } 2025-05-13 22:37:35.959082 | orchestrator | 22:37:35.959 STDOUT terraform:  + network { 2025-05-13 22:37:35.959097 | orchestrator | 22:37:35.959 STDOUT terraform:  + access_network = false 2025-05-13 22:37:35.959134 | orchestrator | 22:37:35.959 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-13 22:37:35.959150 | orchestrator | 22:37:35.959 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-13 22:37:35.959202 | orchestrator | 22:37:35.959 STDOUT terraform:  + mac = (known after apply) 2025-05-13 22:37:35.959252 | orchestrator | 22:37:35.959 STDOUT terraform:  + name = (known after apply) 2025-05-13 22:37:35.959277 | orchestrator | 22:37:35.959 STDOUT terraform:  + port = (known after apply) 2025-05-13 22:37:35.959292 | orchestrator | 22:37:35.959 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 22:37:35.959306 | orchestrator | 22:37:35.959 STDOUT terraform:  } 2025-05-13 22:37:35.959320 | orchestrator | 22:37:35.959 STDOUT terraform:  } 2025-05-13 22:37:35.959390 | orchestrator | 22:37:35.959 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-13 22:37:35.959440 | orchestrator | 22:37:35.959 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-13 22:37:35.959480 | orchestrator | 22:37:35.959 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-13 22:37:35.959530 | orchestrator | 22:37:35.959 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-13 22:37:35.959546 | orchestrator | 22:37:35.959 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-13 22:37:35.959599 | orchestrator | 22:37:35.959 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 22:37:35.959622 | orchestrator | 22:37:35.959 STDOUT terraform:  + availability_zone = "nova" 2025-05-13 22:37:35.959637 | orchestrator | 22:37:35.959 STDOUT terraform:  + config_drive = true 2025-05-13 22:37:35.959688 | orchestrator | 22:37:35.959 STDOUT terraform:  + created = (known after apply) 2025-05-13 22:37:35.959713 | orchestrator | 22:37:35.959 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-13 22:37:35.959756 | orchestrator | 22:37:35.959 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-13 
  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }
  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }
  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }
  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
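Nine attachments are planned for six nodes, so the instance-to-volume wiring cannot be read off this output; a hypothetical sketch with an assumed index mapping and an assumed volume resource name:

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      # how the nine volumes map onto the six nodes is not visible in the
      # plan; this even spread is purely an assumption
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
      volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id  # assumed resource
    }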
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }
  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }
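The floating IP is allocated from the public pool and bound to the manager's management port; a minimal sketch:

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "public"
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }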
  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }
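The management network itself needs very little explicit configuration; a sketch:

    resource "openstack_networking_network_v2" "net_management" {
      name                    = "net-testbed-management"
      availability_zone_hints = ["nova"]
    }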
  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }
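A sketch of the manager port; the subnet resource is not part of this plan excerpt, so the subnet_id reference is an assumption:

    resource "openstack_networking_port_v2" "manager_port_management" {
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        ip_address = "192.168.16.5"
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed name
      }

      # extra prefixes this port may legitimately source traffic from
      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
    }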
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }
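The six node ports differ only in their fixed IP (.10 through .15); a counted sketch, again with the subnet resource name assumed:

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        ip_address = "192.168.16.${count.index + 10}"  # .10 .. .15
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed name
      }

      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.9/20"
      }
    }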
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }
  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }
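The router is wired to the external network by ID and attached to the management subnet; a sketch, with the subnet reference again assumed:

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id  # assumed name
    }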
"0.0.0.0/0" 2025-05-13 22:37:35.976557 | orchestrator | 22:37:35.976 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 22:37:35.976588 | orchestrator | 22:37:35.976 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.976600 | orchestrator | 22:37:35.976 STDOUT terraform:  } 2025-05-13 22:37:35.976651 | orchestrator | 22:37:35.976 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-13 22:37:35.976704 | orchestrator | 22:37:35.976 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-13 22:37:35.976727 | orchestrator | 22:37:35.976 STDOUT terraform:  + description = "wireguard" 2025-05-13 22:37:35.976756 | orchestrator | 22:37:35.976 STDOUT terraform:  + direction = "ingress" 2025-05-13 22:37:35.976765 | orchestrator | 22:37:35.976 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 22:37:35.976798 | orchestrator | 22:37:35.976 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.976807 | orchestrator | 22:37:35.976 STDOUT terraform:  + port_range_max = 51820 2025-05-13 22:37:35.976835 | orchestrator | 22:37:35.976 STDOUT terraform:  + port_range_min = 51820 2025-05-13 22:37:35.976845 | orchestrator | 22:37:35.976 STDOUT terraform:  + protocol = "udp" 2025-05-13 22:37:35.976884 | orchestrator | 22:37:35.976 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.980891 | orchestrator | 22:37:35.976 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 22:37:35.980933 | orchestrator | 22:37:35.976 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 22:37:35.980938 | orchestrator | 22:37:35.976 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 22:37:35.980943 | orchestrator | 22:37:35.976 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.980947 | orchestrator | 22:37:35.976 STDOUT terraform:  } 2025-05-13 22:37:35.980952 | orchestrator | 22:37:35.977 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-13 22:37:35.980993 | orchestrator | 22:37:35.977 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-13 22:37:35.981000 | orchestrator | 22:37:35.977 STDOUT terraform:  + direction = "ingress" 2025-05-13 22:37:35.981007 | orchestrator | 22:37:35.977 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 22:37:35.981011 | orchestrator | 22:37:35.977 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981015 | orchestrator | 22:37:35.977 STDOUT terraform:  + protocol = "tcp" 2025-05-13 22:37:35.981019 | orchestrator | 22:37:35.977 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.981033 | orchestrator | 22:37:35.977 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 22:37:35.981037 | orchestrator | 22:37:35.977 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-13 22:37:35.981041 | orchestrator | 22:37:35.977 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 22:37:35.981044 | orchestrator | 22:37:35.977 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.981048 | orchestrator | 22:37:35.977 STDOUT terraform:  } 2025-05-13 22:37:35.981052 | orchestrator | 22:37:35.977 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-13 22:37:35.981056 | 
orchestrator | 22:37:35.977 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-13 22:37:35.981060 | orchestrator | 22:37:35.977 STDOUT terraform:  + direction = "ingress" 2025-05-13 22:37:35.981064 | orchestrator | 22:37:35.977 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 22:37:35.981068 | orchestrator | 22:37:35.977 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981078 | orchestrator | 22:37:35.977 STDOUT terraform:  + protocol = "udp" 2025-05-13 22:37:35.981082 | orchestrator | 22:37:35.977 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.981087 | orchestrator | 22:37:35.977 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 22:37:35.981093 | orchestrator | 22:37:35.977 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-13 22:37:35.981099 | orchestrator | 22:37:35.977 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 22:37:35.981108 | orchestrator | 22:37:35.977 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.981114 | orchestrator | 22:37:35.977 STDOUT terraform:  } 2025-05-13 22:37:35.981121 | orchestrator | 22:37:35.977 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-13 22:37:35.981127 | orchestrator | 22:37:35.977 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-13 22:37:35.981132 | orchestrator | 22:37:35.977 STDOUT terraform:  + direction = "ingress" 2025-05-13 22:37:35.981138 | orchestrator | 22:37:35.977 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 22:37:35.981143 | orchestrator | 22:37:35.977 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981149 | orchestrator | 22:37:35.977 STDOUT terraform:  + protocol = "icmp" 2025-05-13 22:37:35.981158 | orchestrator | 22:37:35.977 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.981174 | orchestrator | 22:37:35.977 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 22:37:35.981180 | orchestrator | 22:37:35.977 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 22:37:35.981185 | orchestrator | 22:37:35.977 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 22:37:35.981190 | orchestrator | 22:37:35.977 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.981195 | orchestrator | 22:37:35.977 STDOUT terraform:  } 2025-05-13 22:37:35.981201 | orchestrator | 22:37:35.977 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-13 22:37:35.981212 | orchestrator | 22:37:35.978 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-13 22:37:35.981217 | orchestrator | 22:37:35.978 STDOUT terraform:  + direction = "ingress" 2025-05-13 22:37:35.981223 | orchestrator | 22:37:35.978 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 22:37:35.981228 | orchestrator | 22:37:35.978 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981234 | orchestrator | 22:37:35.978 STDOUT terraform:  + protocol = "tcp" 2025-05-13 22:37:35.981240 | orchestrator | 22:37:35.978 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.981246 | orchestrator | 22:37:35.978 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 22:37:35.981252 | orchestrator | 22:37:35.978 STDOUT terraform:  + 
remote_ip_prefix = "0.0.0.0/0" 2025-05-13 22:37:35.981258 | orchestrator | 22:37:35.978 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 22:37:35.981264 | orchestrator | 22:37:35.978 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.981270 | orchestrator | 22:37:35.978 STDOUT terraform:  } 2025-05-13 22:37:35.981276 | orchestrator | 22:37:35.978 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-13 22:37:35.981282 | orchestrator | 22:37:35.978 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-13 22:37:35.981288 | orchestrator | 22:37:35.978 STDOUT terraform:  + direction = "ingress" 2025-05-13 22:37:35.981293 | orchestrator | 22:37:35.978 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 22:37:35.981299 | orchestrator | 22:37:35.978 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981304 | orchestrator | 22:37:35.978 STDOUT terraform:  + protocol = "udp" 2025-05-13 22:37:35.981310 | orchestrator | 22:37:35.978 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.981316 | orchestrator | 22:37:35.978 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 22:37:35.981322 | orchestrator | 22:37:35.978 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 22:37:35.981328 | orchestrator | 22:37:35.978 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 22:37:35.981334 | orchestrator | 22:37:35.978 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.981341 | orchestrator | 22:37:35.978 STDOUT terraform:  } 2025-05-13 22:37:35.981345 | orchestrator | 22:37:35.978 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-13 22:37:35.981349 | orchestrator | 22:37:35.978 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-13 22:37:35.981353 | orchestrator | 22:37:35.978 STDOUT terraform:  + direction = "ingress" 2025-05-13 22:37:35.981357 | orchestrator | 22:37:35.978 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 22:37:35.981361 | orchestrator | 22:37:35.978 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981364 | orchestrator | 22:37:35.978 STDOUT terraform:  + protocol = "icmp" 2025-05-13 22:37:35.981372 | orchestrator | 22:37:35.978 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.981378 | orchestrator | 22:37:35.978 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 22:37:35.981386 | orchestrator | 22:37:35.978 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 22:37:35.981390 | orchestrator | 22:37:35.978 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 22:37:35.981394 | orchestrator | 22:37:35.978 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.981398 | orchestrator | 22:37:35.978 STDOUT terraform:  } 2025-05-13 22:37:35.981402 | orchestrator | 22:37:35.978 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-13 22:37:35.981406 | orchestrator | 22:37:35.979 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-13 22:37:35.981410 | orchestrator | 22:37:35.979 STDOUT terraform:  + description = "vrrp" 2025-05-13 22:37:35.981414 | orchestrator | 22:37:35.979 STDOUT terraform:  + direction = "ingress" 
2025-05-13 22:37:35.981418 | orchestrator | 22:37:35.979 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 22:37:35.981422 | orchestrator | 22:37:35.979 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981425 | orchestrator | 22:37:35.979 STDOUT terraform:  + protocol = "112" 2025-05-13 22:37:35.981429 | orchestrator | 22:37:35.979 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.981433 | orchestrator | 22:37:35.979 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 22:37:35.981437 | orchestrator | 22:37:35.979 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 22:37:35.981440 | orchestrator | 22:37:35.979 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 22:37:35.981444 | orchestrator | 22:37:35.979 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.981448 | orchestrator | 22:37:35.979 STDOUT terraform:  } 2025-05-13 22:37:35.981452 | orchestrator | 22:37:35.979 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-13 22:37:35.981456 | orchestrator | 22:37:35.979 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-13 22:37:35.981460 | orchestrator | 22:37:35.979 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 22:37:35.981463 | orchestrator | 22:37:35.979 STDOUT terraform:  + description = "management security group" 2025-05-13 22:37:35.981467 | orchestrator | 22:37:35.979 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981471 | orchestrator | 22:37:35.979 STDOUT terraform:  + name = "testbed-management" 2025-05-13 22:37:35.981475 | orchestrator | 22:37:35.979 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.981479 | orchestrator | 22:37:35.979 STDOUT terraform:  + stateful = (known after apply) 2025-05-13 22:37:35.981482 | orchestrator | 22:37:35.979 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.981486 | orchestrator | 22:37:35.979 STDOUT terraform:  } 2025-05-13 22:37:35.981494 | orchestrator | 22:37:35.979 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-13 22:37:35.981498 | orchestrator | 22:37:35.979 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-13 22:37:35.981502 | orchestrator | 22:37:35.979 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 22:37:35.981506 | orchestrator | 22:37:35.979 STDOUT terraform:  + description = "node security group" 2025-05-13 22:37:35.981509 | orchestrator | 22:37:35.979 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981513 | orchestrator | 22:37:35.979 STDOUT terraform:  + name = "testbed-node" 2025-05-13 22:37:35.981517 | orchestrator | 22:37:35.979 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.981521 | orchestrator | 22:37:35.979 STDOUT terraform:  + stateful = (known after apply) 2025-05-13 22:37:35.981524 | orchestrator | 22:37:35.979 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.981528 | orchestrator | 22:37:35.979 STDOUT terraform:  } 2025-05-13 22:37:35.981537 | orchestrator | 22:37:35.979 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-13 22:37:35.981544 | orchestrator | 22:37:35.980 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-13 22:37:35.981549 | orchestrator | 22:37:35.980 STDOUT 
terraform:  + all_tags = (known after apply) 2025-05-13 22:37:35.981552 | orchestrator | 22:37:35.980 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-13 22:37:35.981556 | orchestrator | 22:37:35.980 STDOUT terraform:  + dns_nameservers = [ 2025-05-13 22:37:35.981560 | orchestrator | 22:37:35.980 STDOUT terraform:  + "8.8.8.8", 2025-05-13 22:37:35.981564 | orchestrator | 22:37:35.980 STDOUT terraform:  + "9.9.9.9", 2025-05-13 22:37:35.981568 | orchestrator | 22:37:35.980 STDOUT terraform:  ] 2025-05-13 22:37:35.981571 | orchestrator | 22:37:35.980 STDOUT terraform:  + enable_dhcp = true 2025-05-13 22:37:35.981575 | orchestrator | 22:37:35.980 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-13 22:37:35.981579 | orchestrator | 22:37:35.980 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981583 | orchestrator | 22:37:35.980 STDOUT terraform:  + ip_version = 4 2025-05-13 22:37:35.981586 | orchestrator | 22:37:35.980 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-13 22:37:35.981590 | orchestrator | 22:37:35.980 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-13 22:37:35.981594 | orchestrator | 22:37:35.980 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-13 22:37:35.981598 | orchestrator | 22:37:35.980 STDOUT terraform:  + network_id = (known after apply) 2025-05-13 22:37:35.981602 | orchestrator | 22:37:35.980 STDOUT terraform:  + no_gateway = false 2025-05-13 22:37:35.981605 | orchestrator | 22:37:35.980 STDOUT terraform:  + region = (known after apply) 2025-05-13 22:37:35.981609 | orchestrator | 22:37:35.980 STDOUT terraform:  + service_types = (known after apply) 2025-05-13 22:37:35.981613 | orchestrator | 22:37:35.980 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 22:37:35.981620 | orchestrator | 22:37:35.980 STDOUT terraform:  + allocation_pool { 2025-05-13 22:37:35.981624 | orchestrator | 22:37:35.980 STDOUT terraform:  + end = "192.168.31.250" 2025-05-13 22:37:35.981628 | orchestrator | 22:37:35.980 STDOUT terraform:  + start = "192.168.31.200" 2025-05-13 22:37:35.981631 | orchestrator | 22:37:35.980 STDOUT terraform:  } 2025-05-13 22:37:35.981635 | orchestrator | 22:37:35.980 STDOUT terraform:  } 2025-05-13 22:37:35.981639 | orchestrator | 22:37:35.980 STDOUT terraform:  # terraform_data.image will be created 2025-05-13 22:37:35.981643 | orchestrator | 22:37:35.980 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-13 22:37:35.981646 | orchestrator | 22:37:35.980 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981650 | orchestrator | 22:37:35.980 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-13 22:37:35.981654 | orchestrator | 22:37:35.980 STDOUT terraform:  + output = (known after apply) 2025-05-13 22:37:35.981658 | orchestrator | 22:37:35.980 STDOUT terraform:  } 2025-05-13 22:37:35.981661 | orchestrator | 22:37:35.980 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-13 22:37:35.981665 | orchestrator | 22:37:35.980 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-13 22:37:35.981669 | orchestrator | 22:37:35.980 STDOUT terraform:  + id = (known after apply) 2025-05-13 22:37:35.981673 | orchestrator | 22:37:35.980 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-13 22:37:35.981676 | orchestrator | 22:37:35.980 STDOUT terraform:  + output = (known after apply) 2025-05-13 22:37:35.981680 | orchestrator | 22:37:35.980 STDOUT terraform:  } 2025-05-13 22:37:35.981684 | orchestrator | 22:37:35.980 STDOUT 
terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-13 22:37:35.981688 | orchestrator | 22:37:35.980 STDOUT terraform: Changes to Outputs: 2025-05-13 22:37:35.981692 | orchestrator | 22:37:35.980 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-13 22:37:35.981696 | orchestrator | 22:37:35.980 STDOUT terraform:  + private_key = (sensitive value) 2025-05-13 22:37:36.192604 | orchestrator | 22:37:36.192 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-13 22:37:36.192673 | orchestrator | 22:37:36.192 STDOUT terraform: terraform_data.image: Creating... 2025-05-13 22:37:36.192765 | orchestrator | 22:37:36.192 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=77075d78-e599-6808-3871-12c8018587d4] 2025-05-13 22:37:36.192910 | orchestrator | 22:37:36.192 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=4d1ff013-df7c-1fdf-2d58-f86f09ec07d8] 2025-05-13 22:37:36.204774 | orchestrator | 22:37:36.202 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-13 22:37:36.207786 | orchestrator | 22:37:36.207 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-13 22:37:36.212161 | orchestrator | 22:37:36.212 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-13 22:37:36.214085 | orchestrator | 22:37:36.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-13 22:37:36.215651 | orchestrator | 22:37:36.215 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-13 22:37:36.216295 | orchestrator | 22:37:36.216 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-13 22:37:36.218997 | orchestrator | 22:37:36.218 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-13 22:37:36.219162 | orchestrator | 22:37:36.219 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-13 22:37:36.221235 | orchestrator | 22:37:36.221 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-13 22:37:36.221692 | orchestrator | 22:37:36.221 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-13 22:37:37.003754 | orchestrator | 22:37:37.002 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-13 22:37:37.004584 | orchestrator | 22:37:37.004 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-13 22:37:37.015736 | orchestrator | 22:37:37.015 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-13 22:37:37.017814 | orchestrator | 22:37:37.017 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-13 22:37:38.049663 | orchestrator | 22:37:38.049 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 2s [id=testbed] 2025-05-13 22:37:38.058125 | orchestrator | 22:37:38.057 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-13 22:37:43.352022 | orchestrator | 22:37:43.351 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 7s [id=88e77150-5cf1-41c3-89c1-3e146fcb6bc4] 2025-05-13 22:37:43.367849 | orchestrator | 22:37:43.367 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 
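The plan above pins down the management network layout: a /20 management subnet with a small DHCP allocation pool at the top of the range, and a port with fixed IP 192.168.16.15 (presumably manager_port_management, created later in the apply) that whitelists additional VRRP and internal ranges via allowed_address_pairs. A minimal HCL sketch of those two stanzas, with all literal values copied from the plan and everything else (resource wiring, provider setup) assumed rather than taken from the testbed's actual sources:

    # Sketch only: values from the plan output above; structure assumed.
    resource "openstack_networking_subnet_v2" "subnet_management" {
      name            = "subnet-testbed-management"
      network_id      = openstack_networking_network_v2.net_management.id
      cidr            = "192.168.16.0/20"
      ip_version      = 4
      enable_dhcp     = true
      dns_nameservers = ["8.8.8.8", "9.9.9.9"]

      allocation_pool {
        start = "192.168.31.200"
        end   = "192.168.31.250"
      }
    }

    resource "openstack_networking_port_v2" "manager_port_management" {
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.15"
      }

      # Allow the port to carry traffic for these extra addresses
      # (the VRRP and internal ranges listed in the plan).
      dynamic "allowed_address_pairs" {
        for_each = ["192.168.112.0/20", "192.168.16.254/20",
                    "192.168.16.8/20", "192.168.16.9/20"]
        content {
          ip_address = allowed_address_pairs.value
        }
      }
    }
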
2025-05-13 22:37:46.209356 | orchestrator | 22:37:46.208 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-13 22:37:46.215735 | orchestrator | 22:37:46.215 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-13 22:37:46.217828 | orchestrator | 22:37:46.217 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-13 22:37:46.220122 | orchestrator | 22:37:46.219 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-13 22:37:46.222390 | orchestrator | 22:37:46.222 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-13 22:37:46.222499 | orchestrator | 22:37:46.222 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-13 22:37:46.795758 | orchestrator | 22:37:46.795 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=c475673a-0096-49dd-a2ab-dba7e6677c05] 2025-05-13 22:37:46.805377 | orchestrator | 22:37:46.805 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-13 22:37:46.828885 | orchestrator | 22:37:46.828 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=2123f305-4e6b-4736-99ab-18aaa07aaf45] 2025-05-13 22:37:46.833210 | orchestrator | 22:37:46.832 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=213ab59a-cb73-4407-9705-0b2ca8256438] 2025-05-13 22:37:46.835643 | orchestrator | 22:37:46.835 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-13 22:37:46.838539 | orchestrator | 22:37:46.838 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-13 22:37:46.853126 | orchestrator | 22:37:46.852 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=61dae38b-1d40-412d-9df6-8d9734e6ced8] 2025-05-13 22:37:46.865644 | orchestrator | 22:37:46.864 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=0156a383-42b8-4f65-bebb-758e8d549677] 2025-05-13 22:37:46.866780 | orchestrator | 22:37:46.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-13 22:37:46.869926 | orchestrator | 22:37:46.869 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-13 22:37:46.874859 | orchestrator | 22:37:46.874 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=0aeac9b9-4df2-4d9e-975e-68588115061e] 2025-05-13 22:37:46.879831 | orchestrator | 22:37:46.879 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-13 22:37:47.017726 | orchestrator | 22:37:47.017 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-13 22:37:47.018114 | orchestrator | 22:37:47.017 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... 
[10s elapsed] 2025-05-13 22:37:47.193778 | orchestrator | 22:37:47.193 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=a5357627-6c2a-405a-984b-26b28125b648] 2025-05-13 22:37:47.204541 | orchestrator | 22:37:47.204 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-13 22:37:47.214366 | orchestrator | 22:37:47.214 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=017cd50a5acb708c44000de7b8f0a12ed924c049] 2025-05-13 22:37:47.216922 | orchestrator | 22:37:47.216 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=46243ec1-9f30-4dd7-b280-49f134625000] 2025-05-13 22:37:47.217304 | orchestrator | 22:37:47.217 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-13 22:37:47.222641 | orchestrator | 22:37:47.222 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=0de67745672d60c7f85fa1243889d57d50f8064e] 2025-05-13 22:37:47.224183 | orchestrator | 22:37:47.224 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-13 22:37:48.059592 | orchestrator | 22:37:48.059 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-13 22:37:48.254501 | orchestrator | 22:37:48.254 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=55ed4948-9fe5-49ab-9e57-6f6f508ce8e3] 2025-05-13 22:37:53.371525 | orchestrator | 22:37:53.371 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-13 22:37:53.700155 | orchestrator | 22:37:53.699 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=7a0cda05-6059-4279-9091-38c6851dd1b0] 2025-05-13 22:37:53.949783 | orchestrator | 22:37:53.949 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 7s [id=639921c0-8e0a-4da6-97ab-c3b678dffdfd] 2025-05-13 22:37:53.958303 | orchestrator | 22:37:53.958 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-13 22:37:56.806135 | orchestrator | 22:37:56.805 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-13 22:37:56.836431 | orchestrator | 22:37:56.836 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-13 22:37:56.839717 | orchestrator | 22:37:56.839 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-13 22:37:56.867488 | orchestrator | 22:37:56.867 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-13 22:37:56.870825 | orchestrator | 22:37:56.870 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-13 22:37:56.881012 | orchestrator | 22:37:56.880 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-05-13 22:37:57.176848 | orchestrator | 22:37:57.176 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=14cb708c-4d88-41dd-af1a-38adc7d81bad] 2025-05-13 22:37:57.207082 | orchestrator | 22:37:57.206 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=742983f3-e890-4b21-9db6-0cea970b685b] 2025-05-13 22:37:57.241093 | orchestrator | 22:37:57.240 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=c6453a8e-6632-42ad-a179-435c946212ec] 2025-05-13 22:37:57.258277 | orchestrator | 22:37:57.257 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=318ab0b7-de56-4f87-ab50-209f607532c7] 2025-05-13 22:37:57.269354 | orchestrator | 22:37:57.268 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=12c49378-a079-4e0e-98f3-678427126c28] 2025-05-13 22:37:57.275592 | orchestrator | 22:37:57.275 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=b255196f-0cab-4746-bd7d-248a31197f78] 2025-05-13 22:38:03.804935 | orchestrator | 22:38:03.804 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 10s [id=fdff905e-79fd-4609-bc93-b5db4ba7a547] 2025-05-13 22:38:03.810448 | orchestrator | 22:38:03.810 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-13 22:38:03.811552 | orchestrator | 22:38:03.811 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-13 22:38:03.815427 | orchestrator | 22:38:03.815 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-13 22:38:04.637516 | orchestrator | 22:38:04.637 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=db7015f1-9635-4528-87ee-879ff7e8884f] 2025-05-13 22:38:04.644324 | orchestrator | 22:38:04.643 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-13 22:38:04.646099 | orchestrator | 22:38:04.645 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-13 22:38:04.647258 | orchestrator | 22:38:04.646 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=af48cd24-0407-41ff-b5ef-1ec5df0463d4] 2025-05-13 22:38:04.653822 | orchestrator | 22:38:04.653 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-13 22:38:04.655162 | orchestrator | 22:38:04.654 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-13 22:38:04.655802 | orchestrator | 22:38:04.655 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-05-13 22:38:04.657480 | orchestrator | 22:38:04.657 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-13 22:38:04.657669 | orchestrator | 22:38:04.657 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-13 22:38:04.657687 | orchestrator | 22:38:04.657 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
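One detail worth calling out among the rules being created here: security_group_rule_vrrp uses protocol "112", the IANA protocol number for VRRP. VRRP is its own IP protocol with no port concept, which is why the plan listed neither port_range_min nor port_range_max for this rule. As HCL, such a rule plausibly reads as follows (the security_group_id reference is an assumption):

    # Sketch: VRRP is IP protocol 112 and carries no port range.
    resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      description       = "vrrp"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "112"
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_node.id # assumed target group
    }
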
2025-05-13 22:38:04.660232 | orchestrator | 22:38:04.660 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-13 22:38:04.761733 | orchestrator | 22:38:04.761 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=0d2cbe29-606e-4e9e-9220-5c4d257ca743] 2025-05-13 22:38:04.769194 | orchestrator | 22:38:04.768 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-13 22:38:04.882144 | orchestrator | 22:38:04.881 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=951cc800-f013-435a-b2ac-bcfdc14d9333] 2025-05-13 22:38:04.882855 | orchestrator | 22:38:04.882 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=14967e38-b201-43f8-836b-e93bf62b8bff] 2025-05-13 22:38:04.897539 | orchestrator | 22:38:04.897 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-13 22:38:04.897632 | orchestrator | 22:38:04.897 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-13 22:38:05.010516 | orchestrator | 22:38:05.010 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=d7b7ef51-5c27-44a2-90f2-d462214a878b] 2025-05-13 22:38:05.018834 | orchestrator | 22:38:05.018 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=a61fefae-2ff3-4a60-8fd9-1412e18b544e] 2025-05-13 22:38:05.027692 | orchestrator | 22:38:05.027 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-13 22:38:05.031224 | orchestrator | 22:38:05.031 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-13 22:38:05.187008 | orchestrator | 22:38:05.186 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=72b561e4-a98a-435f-90bf-fcc98c343c13] 2025-05-13 22:38:05.200028 | orchestrator | 22:38:05.199 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-13 22:38:05.316991 | orchestrator | 22:38:05.316 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=dff50fe2-f7de-460d-b326-b3bbc8deffef] 2025-05-13 22:38:05.328614 | orchestrator | 22:38:05.328 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-13 22:38:05.425952 | orchestrator | 22:38:05.425 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=4aa3fce5-2ccc-444f-a8dc-21f3211679ed] 2025-05-13 22:38:05.443769 | orchestrator | 22:38:05.443 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=4daa1267-44ef-49b6-8d11-cf408e3f73a2] 2025-05-13 22:38:10.415299 | orchestrator | 22:38:10.414 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 6s [id=d0e1e523-d512-440a-afff-1b02d52775ca] 2025-05-13 22:38:10.421985 | orchestrator | 22:38:10.421 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 
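The manager's floating IP is handled in two steps, allocate then bind, and the manager_address output that the plan masked as "(sensitive value)" presumably surfaces this address, which is also why the apply summary later prints manager_address with no value. A sketch under those assumptions (the pool name "public" is invented for illustration):

    # Sketch only: pool name and output wiring are assumptions.
    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "public"
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }

    # Sensitive outputs show as "(sensitive value)" in the plan and
    # are not echoed by a plain `terraform apply`.
    output "manager_address" {
      value     = openstack_networking_floatingip_v2.manager_floating_ip.address
      sensitive = true
    }
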
2025-05-13 22:38:10.669688 | orchestrator | 22:38:10.669 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=f33837cc-6428-4261-91c3-70c1d6502de9] 2025-05-13 22:38:10.852188 | orchestrator | 22:38:10.851 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=192dee38-73fc-48be-9bde-82e56e6cf4b3] 2025-05-13 22:38:10.980913 | orchestrator | 22:38:10.980 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=8a4a9ceb-a748-49a2-a878-0f869dd4c2a9] 2025-05-13 22:38:11.067411 | orchestrator | 22:38:11.067 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=f8495074-a76c-4b95-ab7d-d36cf8150f42] 2025-05-13 22:38:11.113707 | orchestrator | 22:38:11.113 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=4769eab7-1798-46c7-a109-5dac0eedcb98] 2025-05-13 22:38:11.130058 | orchestrator | 22:38:11.129 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=720b83e2-0c6a-4d00-bc18-b6721d1fabfd] 2025-05-13 22:38:11.386905 | orchestrator | 22:38:11.386 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=5be32905-1aca-4fc8-a27c-c2752e73b72a] 2025-05-13 22:38:11.421345 | orchestrator | 22:38:11.421 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-13 22:38:11.421666 | orchestrator | 22:38:11.421 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-13 22:38:11.424084 | orchestrator | 22:38:11.423 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-13 22:38:11.437771 | orchestrator | 22:38:11.437 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-13 22:38:11.440747 | orchestrator | 22:38:11.440 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-13 22:38:11.445461 | orchestrator | 22:38:11.445 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-13 22:38:16.742690 | orchestrator | 22:38:16.742 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=dbd3e574-f1b5-41bb-b7c3-2aa384f593f9] 2025-05-13 22:38:16.753714 | orchestrator | 22:38:16.752 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-13 22:38:16.760020 | orchestrator | 22:38:16.759 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-05-13 22:38:16.762754 | orchestrator | 22:38:16.762 STDOUT terraform: local_file.inventory: Creating... 2025-05-13 22:38:16.766923 | orchestrator | 22:38:16.766 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=c9046d1b931c6c59597c06f6cf5de4e7e86242f6] 2025-05-13 22:38:16.769186 | orchestrator | 22:38:16.769 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=78f685394544c6a7422cb0baa81b7b122930fa52] 2025-05-13 22:38:17.250544 | orchestrator | 22:38:17.250 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=dbd3e574-f1b5-41bb-b7c3-2aa384f593f9] 2025-05-13 22:38:21.422402 | orchestrator | 22:38:21.421 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[10s elapsed] 2025-05-13 22:38:21.422612 | orchestrator | 22:38:21.422 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-13 22:38:21.429222 | orchestrator | 22:38:21.428 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-13 22:38:21.445515 | orchestrator | 22:38:21.445 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-13 22:38:21.445623 | orchestrator | 22:38:21.445 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-13 22:38:21.448743 | orchestrator | 22:38:21.448 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-13 22:38:31.423579 | orchestrator | 22:38:31.423 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-13 22:38:31.423707 | orchestrator | 22:38:31.423 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-13 22:38:31.429684 | orchestrator | 22:38:31.429 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-13 22:38:31.446995 | orchestrator | 22:38:31.446 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-13 22:38:31.447093 | orchestrator | 22:38:31.446 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-13 22:38:31.449096 | orchestrator | 22:38:31.448 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-13 22:38:31.792822 | orchestrator | 22:38:31.792 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=e40a66a8-0605-4926-acf4-d023833ed357] 2025-05-13 22:38:31.908888 | orchestrator | 22:38:31.908 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=6edbb8c1-2bd3-427f-982e-767abde4b3be] 2025-05-13 22:38:32.445766 | orchestrator | 22:38:32.445 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=f5fbbe8d-4cfb-4012-aeb8-29dfd16c7ff7] 2025-05-13 22:38:41.423699 | orchestrator | 22:38:41.423 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-05-13 22:38:41.429958 | orchestrator | 22:38:41.429 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-05-13 22:38:41.447545 | orchestrator | 22:38:41.447 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-05-13 22:38:41.759420 | orchestrator | 22:38:41.759 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=3cb8fa4d-2259-4dd5-846a-3f22f011349a] 2025-05-13 22:38:41.930203 | orchestrator | 22:38:41.929 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=79992a61-3632-4549-83d6-2636fbe9f7e0] 2025-05-13 22:38:41.934439 | orchestrator | 22:38:41.934 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=5536d126-1c3c-46a8-a930-1bd386f754e0] 2025-05-13 22:38:41.949016 | orchestrator | 22:38:41.948 STDOUT terraform: null_resource.node_semaphore: Creating... 
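null_resource.node_semaphore creates no infrastructure and completes instantly; it looks like the common Terraform barrier pattern, a single no-op resource that depends on every node server so that downstream resources can depend on one handle instead of enumerating all six instances. A sketch of that pattern (the dependency list is an assumption, not the testbed's actual source):

    # Sketch of a dependency barrier (hashicorp/null provider).
    resource "null_resource" "node_semaphore" {
      depends_on = [openstack_compute_instance_v2.node_server]
    }
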
2025-05-13 22:38:41.965247 | orchestrator | 22:38:41.965 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-13 22:38:41.966789 | orchestrator | 22:38:41.966 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5651675148947204669] 2025-05-13 22:38:41.972294 | orchestrator | 22:38:41.972 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-13 22:38:41.972640 | orchestrator | 22:38:41.972 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-13 22:38:41.974469 | orchestrator | 22:38:41.974 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-13 22:38:41.975755 | orchestrator | 22:38:41.975 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-13 22:38:41.975974 | orchestrator | 22:38:41.975 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-13 22:38:41.983608 | orchestrator | 22:38:41.983 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-13 22:38:41.990740 | orchestrator | 22:38:41.990 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-13 22:38:41.992721 | orchestrator | 22:38:41.992 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-13 22:38:42.012051 | orchestrator | 22:38:42.011 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-05-13 22:38:47.291598 | orchestrator | 22:38:47.291 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=6edbb8c1-2bd3-427f-982e-767abde4b3be/213ab59a-cb73-4407-9705-0b2ca8256438] 2025-05-13 22:38:47.310143 | orchestrator | 22:38:47.309 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=f5fbbe8d-4cfb-4012-aeb8-29dfd16c7ff7/55ed4948-9fe5-49ab-9e57-6f6f508ce8e3] 2025-05-13 22:38:47.322210 | orchestrator | 22:38:47.321 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=79992a61-3632-4549-83d6-2636fbe9f7e0/0156a383-42b8-4f65-bebb-758e8d549677] 2025-05-13 22:38:47.353259 | orchestrator | 22:38:47.352 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=6edbb8c1-2bd3-427f-982e-767abde4b3be/46243ec1-9f30-4dd7-b280-49f134625000] 2025-05-13 22:38:47.353966 | orchestrator | 22:38:47.353 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=f5fbbe8d-4cfb-4012-aeb8-29dfd16c7ff7/0aeac9b9-4df2-4d9e-975e-68588115061e] 2025-05-13 22:38:47.383874 | orchestrator | 22:38:47.383 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=79992a61-3632-4549-83d6-2636fbe9f7e0/a5357627-6c2a-405a-984b-26b28125b648] 2025-05-13 22:38:47.394820 | orchestrator | 22:38:47.394 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=79992a61-3632-4549-83d6-2636fbe9f7e0/c475673a-0096-49dd-a2ab-dba7e6677c05] 2025-05-13 22:38:47.405344 | orchestrator | 22:38:47.404 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s 
[id=6edbb8c1-2bd3-427f-982e-767abde4b3be/2123f305-4e6b-4736-99ab-18aaa07aaf45] 2025-05-13 22:38:47.408502 | orchestrator | 22:38:47.407 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=f5fbbe8d-4cfb-4012-aeb8-29dfd16c7ff7/61dae38b-1d40-412d-9df6-8d9734e6ced8] 2025-05-13 22:38:52.012964 | orchestrator | 22:38:52.012 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-13 22:38:56.197952 | orchestrator | 22:38:56.197 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 14s [id=7b714c7f-2383-4ac4-8f89-c6d85e0a7ce7] 2025-05-13 22:38:56.213021 | orchestrator | 22:38:56.212 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-05-13 22:38:56.213282 | orchestrator | 22:38:56.213 STDOUT terraform: Outputs: 2025-05-13 22:38:56.213414 | orchestrator | 22:38:56.213 STDOUT terraform: manager_address = 2025-05-13 22:38:56.213501 | orchestrator | 22:38:56.213 STDOUT terraform: private_key = 2025-05-13 22:38:56.306962 | orchestrator | ok: Runtime: 0:01:30.572700 2025-05-13 22:38:56.333045 | 2025-05-13 22:38:56.333178 | TASK [Create infrastructure (stable)] 2025-05-13 22:38:56.866399 | orchestrator | skipping: Conditional result was False 2025-05-13 22:38:56.883605 | 2025-05-13 22:38:56.884007 | TASK [Fetch manager address] 2025-05-13 22:38:57.332078 | orchestrator | ok 2025-05-13 22:38:57.342029 | 2025-05-13 22:38:57.342172 | TASK [Set manager_host address] 2025-05-13 22:38:57.414269 | orchestrator | ok 2025-05-13 22:38:57.421023 | 2025-05-13 22:38:57.421141 | LOOP [Update ansible collections] 2025-05-13 22:38:58.308002 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-13 22:38:58.308417 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-13 22:38:58.308487 | orchestrator | Starting galaxy collection install process 2025-05-13 22:38:58.308537 | orchestrator | Process install dependency map 2025-05-13 22:38:58.308580 | orchestrator | Starting collection install process 2025-05-13 22:38:58.308622 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-05-13 22:38:58.308730 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-05-13 22:38:58.308788 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-13 22:38:58.308888 | orchestrator | ok: Item: commons Runtime: 0:00:00.552706 2025-05-13 22:38:59.144495 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-13 22:38:59.144704 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-13 22:38:59.144764 | orchestrator | Starting galaxy collection install process 2025-05-13 22:38:59.144808 | orchestrator | Process install dependency map 2025-05-13 22:38:59.144845 | orchestrator | Starting collection install process 2025-05-13 22:38:59.144881 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-05-13 22:38:59.144917 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-05-13 22:38:59.144950 | orchestrator | osism.services:999.0.0 
was installed successfully 2025-05-13 22:38:59.145002 | orchestrator | ok: Item: services Runtime: 0:00:00.570378 2025-05-13 22:38:59.166278 | 2025-05-13 22:38:59.166492 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-13 22:39:10.938548 | orchestrator | ok 2025-05-13 22:39:10.949722 | 2025-05-13 22:39:10.949850 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-13 22:40:10.990592 | orchestrator | ok 2025-05-13 22:40:10.998152 | 2025-05-13 22:40:10.998266 | TASK [Fetch manager ssh hostkey] 2025-05-13 22:40:12.571645 | orchestrator | Output suppressed because no_log was given 2025-05-13 22:40:12.590371 | 2025-05-13 22:40:12.590560 | TASK [Get ssh keypair from terraform environment] 2025-05-13 22:40:13.133819 | orchestrator | ok: Runtime: 0:00:00.008585 2025-05-13 22:40:13.151559 | 2025-05-13 22:40:13.151747 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-13 22:40:13.200734 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-13 22:40:13.209931 | 2025-05-13 22:40:13.210057 | TASK [Run manager part 0] 2025-05-13 22:40:14.118388 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-13 22:40:14.167093 | orchestrator | 2025-05-13 22:40:14.167149 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-13 22:40:14.167157 | orchestrator | 2025-05-13 22:40:14.167169 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-13 22:40:15.940394 | orchestrator | ok: [testbed-manager] 2025-05-13 22:40:15.940497 | orchestrator | 2025-05-13 22:40:15.940546 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-13 22:40:15.940569 | orchestrator | 2025-05-13 22:40:15.940590 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 22:40:17.818731 | orchestrator | ok: [testbed-manager] 2025-05-13 22:40:17.818779 | orchestrator | 2025-05-13 22:40:17.818790 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-13 22:40:18.448091 | orchestrator | ok: [testbed-manager] 2025-05-13 22:40:18.448238 | orchestrator | 2025-05-13 22:40:18.448285 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-13 22:40:18.501804 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:40:18.501869 | orchestrator | 2025-05-13 22:40:18.501880 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-13 22:40:18.536341 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:40:18.536415 | orchestrator | 2025-05-13 22:40:18.536423 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-13 22:40:18.562409 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:40:18.562470 | orchestrator | 2025-05-13 22:40:18.562476 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-13 22:40:18.592976 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:40:18.593036 | orchestrator | 2025-05-13 22:40:18.593042 | orchestrator | TASK [Set venv_command fact (RedHat)] 
****************************************** 2025-05-13 22:40:18.621238 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:40:18.621332 | orchestrator | 2025-05-13 22:40:18.621340 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-13 22:40:18.656722 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:40:18.656788 | orchestrator | 2025-05-13 22:40:18.656799 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-13 22:40:18.690299 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:40:18.690384 | orchestrator | 2025-05-13 22:40:18.690400 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-13 22:40:19.445616 | orchestrator | changed: [testbed-manager] 2025-05-13 22:40:19.445721 | orchestrator | 2025-05-13 22:40:19.445738 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-13 22:43:15.247554 | orchestrator | changed: [testbed-manager] 2025-05-13 22:43:15.247660 | orchestrator | 2025-05-13 22:43:15.247689 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-13 22:44:27.722778 | orchestrator | changed: [testbed-manager] 2025-05-13 22:44:27.722908 | orchestrator | 2025-05-13 22:44:27.722928 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-13 22:44:46.806097 | orchestrator | changed: [testbed-manager] 2025-05-13 22:44:46.806182 | orchestrator | 2025-05-13 22:44:46.806221 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-13 22:44:54.877543 | orchestrator | changed: [testbed-manager] 2025-05-13 22:44:54.877623 | orchestrator | 2025-05-13 22:44:54.877644 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-13 22:44:54.928547 | orchestrator | ok: [testbed-manager] 2025-05-13 22:44:54.928643 | orchestrator | 2025-05-13 22:44:54.928661 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-13 22:44:55.767954 | orchestrator | ok: [testbed-manager] 2025-05-13 22:44:55.768073 | orchestrator | 2025-05-13 22:44:55.768102 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-13 22:44:56.532666 | orchestrator | changed: [testbed-manager] 2025-05-13 22:44:56.533445 | orchestrator | 2025-05-13 22:44:56.533471 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-13 22:45:03.111647 | orchestrator | changed: [testbed-manager] 2025-05-13 22:45:03.111756 | orchestrator | 2025-05-13 22:45:03.111798 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-13 22:45:08.942932 | orchestrator | changed: [testbed-manager] 2025-05-13 22:45:08.943033 | orchestrator | 2025-05-13 22:45:08.943054 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-13 22:45:11.383903 | orchestrator | changed: [testbed-manager] 2025-05-13 22:45:11.384007 | orchestrator | 2025-05-13 22:45:11.384027 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-13 22:45:13.168478 | orchestrator | changed: [testbed-manager] 2025-05-13 22:45:13.168526 | orchestrator | 2025-05-13 22:45:13.168535 | orchestrator | 
TASK [Create directories in /opt/src] ****************************************** 2025-05-13 22:45:14.329292 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-13 22:45:14.330212 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-13 22:45:14.330249 | orchestrator | 2025-05-13 22:45:14.330263 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-13 22:45:14.375608 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-13 22:45:14.375660 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-13 22:45:14.375666 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-13 22:45:14.375671 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-05-13 22:45:17.549505 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-13 22:45:17.549611 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-13 22:45:17.549634 | orchestrator | 2025-05-13 22:45:17.549648 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-13 22:45:18.128350 | orchestrator | changed: [testbed-manager] 2025-05-13 22:45:18.128443 | orchestrator | 2025-05-13 22:45:18.128460 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-13 22:48:41.131793 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-13 22:48:41.131901 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-13 22:48:41.131917 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-13 22:48:41.131929 | orchestrator | 2025-05-13 22:48:41.131971 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-13 22:48:43.630200 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-13 22:48:43.630240 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-13 22:48:43.630244 | orchestrator | 2025-05-13 22:48:43.630249 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-13 22:48:43.630254 | orchestrator | 2025-05-13 22:48:43.630258 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 22:48:45.083795 | orchestrator | ok: [testbed-manager] 2025-05-13 22:48:45.083879 | orchestrator | 2025-05-13 22:48:45.083892 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-13 22:48:45.116485 | orchestrator | ok: [testbed-manager] 2025-05-13 22:48:45.116546 | orchestrator | 2025-05-13 22:48:45.116552 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-13 22:48:45.181987 | orchestrator | ok: [testbed-manager] 2025-05-13 22:48:45.182068 | orchestrator | 2025-05-13 22:48:45.182074 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-13 22:48:46.003392 | orchestrator | changed: [testbed-manager] 2025-05-13 22:48:46.003518 | orchestrator | 2025-05-13 22:48:46.003544 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-13 22:48:46.739051 | orchestrator | 
changed: [testbed-manager] 2025-05-13 22:48:46.739117 | orchestrator | 2025-05-13 22:48:46.739125 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-13 22:48:48.026056 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-13 22:48:48.026118 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-13 22:48:48.026124 | orchestrator | 2025-05-13 22:48:48.026140 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-13 22:48:49.412378 | orchestrator | changed: [testbed-manager] 2025-05-13 22:48:49.412506 | orchestrator | 2025-05-13 22:48:49.412522 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-13 22:48:51.056247 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 22:48:51.056298 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-13 22:48:51.056308 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-13 22:48:51.056315 | orchestrator | 2025-05-13 22:48:51.056324 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-13 22:48:51.596440 | orchestrator | changed: [testbed-manager] 2025-05-13 22:48:51.596561 | orchestrator | 2025-05-13 22:48:51.596591 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-13 22:48:51.669481 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:48:51.669552 | orchestrator | 2025-05-13 22:48:51.669560 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-13 22:48:52.553355 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 22:48:52.553430 | orchestrator | changed: [testbed-manager] 2025-05-13 22:48:52.553440 | orchestrator | 2025-05-13 22:48:52.553448 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-13 22:48:52.590471 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:48:52.590542 | orchestrator | 2025-05-13 22:48:52.590550 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-13 22:48:52.632547 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:48:52.632639 | orchestrator | 2025-05-13 22:48:52.632653 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-13 22:48:52.665557 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:48:52.665639 | orchestrator | 2025-05-13 22:48:52.665653 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-13 22:48:52.725457 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:48:52.725527 | orchestrator | 2025-05-13 22:48:52.725536 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-13 22:48:53.484841 | orchestrator | ok: [testbed-manager] 2025-05-13 22:48:53.484983 | orchestrator | 2025-05-13 22:48:53.485010 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-13 22:48:53.485031 | orchestrator | 2025-05-13 22:48:53.485049 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 22:48:54.901720 | orchestrator | ok: [testbed-manager] 2025-05-13 22:48:54.901818 | orchestrator | 
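[Editor's note] The osism.commons.operator role that just ran encapsulates a standard operator-user pattern: a dedicated group, a user added to adm and sudo, a sudoers drop-in, locale exports in .bashrc, SSH authorized keys, and a password that is left unset and locked. A rough, illustrative equivalent in plain Ansible (the name dragon is inferred from the /home/dragon paths later in this log; the task bodies are assumptions, not the role's implementation):

---
- name: Create an operator user (illustrative sketch)
  hosts: testbed-manager
  become: true
  tasks:
    - name: Create operator group
      ansible.builtin.group:
        name: dragon            # inferred from /home/dragon paths later in the log
        state: present

    - name: Create operator user with a locked password
      ansible.builtin.user:
        name: dragon
        group: dragon
        groups: [adm, sudo]     # additional groups, as in the log
        append: true
        shell: /bin/bash
        password_lock: true

    - name: Copy user sudoers file
      ansible.builtin.copy:
        content: "dragon ALL=(ALL) NOPASSWD: ALL\n"
        dest: /etc/sudoers.d/dragon
        mode: "0440"
        validate: "visudo -cf %s"

    - name: Set ssh authorized keys
      ansible.posix.authorized_key:
        user: dragon
        key: "{{ lookup('file', 'files/operator_id_rsa.pub') }}"  # placeholder key source

Using ansible.posix.authorized_key here lines up with the ansible.posix collection installed from Galaxy earlier in this log.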
2025-05-13 22:48:54.901834 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-13 22:48:55.891139 | orchestrator | changed: [testbed-manager] 2025-05-13 22:48:55.891982 | orchestrator | 2025-05-13 22:48:55.892016 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 22:48:55.892031 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-13 22:48:55.892044 | orchestrator | 2025-05-13 22:48:56.091105 | orchestrator | ok: Runtime: 0:08:42.488543 2025-05-13 22:48:56.108169 | 2025-05-13 22:48:56.108370 | TASK [Point out that logging in to the manager is now possible] 2025-05-13 22:48:56.153002 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-05-13 22:48:56.162232 | 2025-05-13 22:48:56.162404 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-13 22:48:56.194755 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from it here. It takes a few minutes for this task to complete. 2025-05-13 22:48:56.202810 | 2025-05-13 22:48:56.202984 | TASK [Run manager part 1 + 2] 2025-05-13 22:48:57.131467 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-13 22:48:57.186497 | orchestrator | 2025-05-13 22:48:57.186549 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-13 22:48:57.186557 | orchestrator | 2025-05-13 22:48:57.186570 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 22:49:00.194231 | orchestrator | ok: [testbed-manager] 2025-05-13 22:49:00.194287 | orchestrator | 2025-05-13 22:49:00.194312 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-13 22:49:00.232744 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:49:00.232795 | orchestrator | 2025-05-13 22:49:00.232804 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-13 22:49:00.274895 | orchestrator | ok: [testbed-manager] 2025-05-13 22:49:00.274982 | orchestrator | 2025-05-13 22:49:00.274999 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-13 22:49:00.328055 | orchestrator | ok: [testbed-manager] 2025-05-13 22:49:00.328108 | orchestrator | 2025-05-13 22:49:00.328117 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-13 22:49:00.415492 | orchestrator | ok: [testbed-manager] 2025-05-13 22:49:00.415554 | orchestrator | 2025-05-13 22:49:00.415566 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-13 22:49:00.479425 | orchestrator | ok: [testbed-manager] 2025-05-13 22:49:00.479482 | orchestrator | 2025-05-13 22:49:00.479494 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-13 22:49:00.523517 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-13 22:49:00.523563 | orchestrator | 2025-05-13 22:49:00.523568 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-13
22:49:01.239819 | orchestrator | ok: [testbed-manager] 2025-05-13 22:49:01.239874 | orchestrator | 2025-05-13 22:49:01.239883 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-13 22:49:01.290074 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:49:01.290158 | orchestrator | 2025-05-13 22:49:01.290183 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-13 22:49:02.669784 | orchestrator | changed: [testbed-manager] 2025-05-13 22:49:02.669856 | orchestrator | 2025-05-13 22:49:02.669871 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-13 22:49:03.250544 | orchestrator | ok: [testbed-manager] 2025-05-13 22:49:03.250617 | orchestrator | 2025-05-13 22:49:03.250632 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-13 22:49:04.441504 | orchestrator | changed: [testbed-manager] 2025-05-13 22:49:04.441577 | orchestrator | 2025-05-13 22:49:04.441595 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-13 22:49:17.947867 | orchestrator | changed: [testbed-manager] 2025-05-13 22:49:17.947969 | orchestrator | 2025-05-13 22:49:17.947979 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-13 22:49:18.666500 | orchestrator | ok: [testbed-manager] 2025-05-13 22:49:18.666593 | orchestrator | 2025-05-13 22:49:18.666606 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-13 22:49:18.721095 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:49:18.721200 | orchestrator | 2025-05-13 22:49:18.721219 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-13 22:49:19.761892 | orchestrator | changed: [testbed-manager] 2025-05-13 22:49:19.762051 | orchestrator | 2025-05-13 22:49:19.762069 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-13 22:49:20.766275 | orchestrator | changed: [testbed-manager] 2025-05-13 22:49:20.766379 | orchestrator | 2025-05-13 22:49:20.766407 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-13 22:49:21.363979 | orchestrator | changed: [testbed-manager] 2025-05-13 22:49:21.364035 | orchestrator | 2025-05-13 22:49:21.364049 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-13 22:49:21.406206 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-13 22:49:21.406331 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-13 22:49:21.406350 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-13 22:49:21.406363 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-13 22:49:23.324660 | orchestrator | changed: [testbed-manager] 2025-05-13 22:49:23.324758 | orchestrator | 2025-05-13 22:49:23.324775 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-13 22:49:32.520369 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-13 22:49:32.520522 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-13 22:49:32.520545 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-13 22:49:32.520558 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-13 22:49:32.520570 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-13 22:49:32.520582 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-13 22:49:32.520593 | orchestrator | 2025-05-13 22:49:32.520606 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-13 22:49:33.636950 | orchestrator | changed: [testbed-manager] 2025-05-13 22:49:33.637047 | orchestrator | 2025-05-13 22:49:33.637064 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-13 22:49:33.680968 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:49:33.681010 | orchestrator | 2025-05-13 22:49:33.681017 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-13 22:49:36.875235 | orchestrator | changed: [testbed-manager] 2025-05-13 22:49:36.875306 | orchestrator | 2025-05-13 22:49:36.875322 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-13 22:49:36.914012 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:49:36.914103 | orchestrator | 2025-05-13 22:49:36.914113 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-13 22:51:12.675091 | orchestrator | changed: [testbed-manager] 2025-05-13 22:51:12.675291 | orchestrator | 2025-05-13 22:51:12.675315 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-13 22:51:13.873306 | orchestrator | ok: [testbed-manager] 2025-05-13 22:51:13.873393 | orchestrator | 2025-05-13 22:51:13.873409 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 22:51:13.873421 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-13 22:51:13.873432 | orchestrator | 2025-05-13 22:51:14.339434 | orchestrator | ok: Runtime: 0:02:17.413507 2025-05-13 22:51:14.357408 | 2025-05-13 22:51:14.357555 | TASK [Reboot manager] 2025-05-13 22:51:15.894354 | orchestrator | ok: Runtime: 0:00:01.016503 2025-05-13 22:51:15.911150 | 2025-05-13 22:51:15.911323 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-13 22:51:29.899196 | orchestrator | ok 2025-05-13 22:51:29.910037 | 2025-05-13 22:51:29.910214 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-13 22:52:29.955469 | orchestrator | ok 2025-05-13 22:52:29.964179 | 2025-05-13 22:52:29.964305 | TASK [Deploy manager + bootstrap nodes] 2025-05-13 22:52:32.481497 | orchestrator | 2025-05-13 22:52:32.481744 | orchestrator | # DEPLOY MANAGER 2025-05-13 22:52:32.481770 | orchestrator | 2025-05-13 22:52:32.481784 | orchestrator | + set -e 2025-05-13 22:52:32.481797 | orchestrator | + echo 2025-05-13 22:52:32.481811 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-13 22:52:32.481824 | orchestrator | + echo 2025-05-13 22:52:32.481919 | orchestrator | + cat /opt/manager-vars.sh 2025-05-13 22:52:32.484958 | orchestrator | export NUMBER_OF_NODES=6 2025-05-13 22:52:32.484985 | orchestrator | 2025-05-13 22:52:32.484997 | orchestrator | export CEPH_VERSION=reef 2025-05-13 22:52:32.485010 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-13 22:52:32.485022 | orchestrator | export MANAGER_VERSION=latest 2025-05-13 22:52:32.485044 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-13 22:52:32.485055 | orchestrator | 2025-05-13 22:52:32.485073 | orchestrator | export ARA=false 2025-05-13 22:52:32.485084 | orchestrator | export TEMPEST=false 2025-05-13 22:52:32.485101 | orchestrator | export IS_ZUUL=true 2025-05-13 22:52:32.485113 | orchestrator | 2025-05-13 22:52:32.485130 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-13 22:52:32.485143 | orchestrator | export EXTERNAL_API=false 2025-05-13 22:52:32.485154 | orchestrator | 2025-05-13 22:52:32.485176 | orchestrator | export IMAGE_USER=ubuntu 2025-05-13 22:52:32.485187 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-13 22:52:32.485198 | orchestrator | 2025-05-13 22:52:32.485212 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-13 22:52:32.485229 | orchestrator | 2025-05-13 22:52:32.485240 | orchestrator | + echo 2025-05-13 22:52:32.485251 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-13 22:52:32.486075 | orchestrator | ++ export INTERACTIVE=false 2025-05-13 22:52:32.486095 | orchestrator | ++ INTERACTIVE=false 2025-05-13 22:52:32.486107 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-13 22:52:32.486117 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-13 22:52:32.486217 | orchestrator | + source /opt/manager-vars.sh 2025-05-13 22:52:32.486232 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-13 22:52:32.486243 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-13 22:52:32.486254 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-13 22:52:32.486264 | orchestrator | ++ CEPH_VERSION=reef 2025-05-13 22:52:32.486279 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-13 22:52:32.486290 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-13 22:52:32.486301 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-13 22:52:32.486312 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-13 22:52:32.486404 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-13 22:52:32.486418 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-13 22:52:32.486429 | orchestrator | ++ export ARA=false 2025-05-13 22:52:32.486450 | orchestrator | ++ ARA=false 2025-05-13 22:52:32.486461 | orchestrator | ++ export TEMPEST=false 2025-05-13 22:52:32.486472 | orchestrator | ++ TEMPEST=false 2025-05-13 22:52:32.486486 | orchestrator | ++ export IS_ZUUL=true 2025-05-13 22:52:32.486497 | orchestrator | ++ IS_ZUUL=true 2025-05-13 22:52:32.486508 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-13 22:52:32.486519 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-13 22:52:32.486585 | orchestrator | ++ export EXTERNAL_API=false 2025-05-13 22:52:32.486598 | orchestrator | ++ EXTERNAL_API=false 2025-05-13 22:52:32.486609 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-13 22:52:32.486646 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-13 22:52:32.486659 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-13 22:52:32.486724 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-13 
22:52:32.486737 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-13 22:52:32.486749 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-13 22:52:32.487009 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-13 22:52:32.548527 | orchestrator | + docker version 2025-05-13 22:52:32.812373 | orchestrator | Client: Docker Engine - Community 2025-05-13 22:52:32.812509 | orchestrator | Version: 27.5.1 2025-05-13 22:52:32.812542 | orchestrator | API version: 1.47 2025-05-13 22:52:32.812562 | orchestrator | Go version: go1.22.11 2025-05-13 22:52:32.812583 | orchestrator | Git commit: 9f9e405 2025-05-13 22:52:32.812606 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-13 22:52:32.812626 | orchestrator | OS/Arch: linux/amd64 2025-05-13 22:52:32.812647 | orchestrator | Context: default 2025-05-13 22:52:32.812665 | orchestrator | 2025-05-13 22:52:32.812680 | orchestrator | Server: Docker Engine - Community 2025-05-13 22:52:32.812691 | orchestrator | Engine: 2025-05-13 22:52:32.812702 | orchestrator | Version: 27.5.1 2025-05-13 22:52:32.812713 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-05-13 22:52:32.812724 | orchestrator | Go version: go1.22.11 2025-05-13 22:52:32.812735 | orchestrator | Git commit: 4c9b3b0 2025-05-13 22:52:32.812780 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-13 22:52:32.812793 | orchestrator | OS/Arch: linux/amd64 2025-05-13 22:52:32.812803 | orchestrator | Experimental: false 2025-05-13 22:52:32.812814 | orchestrator | containerd: 2025-05-13 22:52:32.812825 | orchestrator | Version: 1.7.27 2025-05-13 22:52:32.812867 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-13 22:52:32.812881 | orchestrator | runc: 2025-05-13 22:52:32.812892 | orchestrator | Version: 1.2.5 2025-05-13 22:52:32.812904 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-13 22:52:32.812914 | orchestrator | docker-init: 2025-05-13 22:52:32.812925 | orchestrator | Version: 0.19.0 2025-05-13 22:52:32.812936 | orchestrator | GitCommit: de40ad0 2025-05-13 22:52:32.816283 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-13 22:52:32.824189 | orchestrator | + set -e 2025-05-13 22:52:32.824239 | orchestrator | + source /opt/manager-vars.sh 2025-05-13 22:52:32.824259 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-13 22:52:32.824277 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-13 22:52:32.824296 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-13 22:52:32.824314 | orchestrator | ++ CEPH_VERSION=reef 2025-05-13 22:52:32.824333 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-13 22:52:32.824353 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-13 22:52:32.824373 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-13 22:52:32.824392 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-13 22:52:32.824411 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-13 22:52:32.824430 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-13 22:52:32.824449 | orchestrator | ++ export ARA=false 2025-05-13 22:52:32.824468 | orchestrator | ++ ARA=false 2025-05-13 22:52:32.824486 | orchestrator | ++ export TEMPEST=false 2025-05-13 22:52:32.824505 | orchestrator | ++ TEMPEST=false 2025-05-13 22:52:32.824524 | orchestrator | ++ export IS_ZUUL=true 2025-05-13 22:52:32.824544 | orchestrator | ++ IS_ZUUL=true 2025-05-13 22:52:32.824563 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-13 22:52:32.824582 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-13 22:52:32.824600 | orchestrator | ++ export EXTERNAL_API=false 2025-05-13 22:52:32.824618 | orchestrator | ++ EXTERNAL_API=false 2025-05-13 22:52:32.824638 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-13 22:52:32.824657 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-13 22:52:32.824676 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-13 22:52:32.824694 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-13 22:52:32.824712 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-13 22:52:32.824730 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-13 22:52:32.824749 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-13 22:52:32.824768 | orchestrator | ++ export INTERACTIVE=false 2025-05-13 22:52:32.824787 | orchestrator | ++ INTERACTIVE=false 2025-05-13 22:52:32.824798 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-13 22:52:32.824809 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-13 22:52:32.824828 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-13 22:52:32.824867 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-13 22:52:32.824880 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-05-13 22:52:32.831488 | orchestrator | + set -e 2025-05-13 22:52:32.831552 | orchestrator | + VERSION=reef 2025-05-13 22:52:32.832476 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-13 22:52:32.838437 | orchestrator | + [[ -n ceph_version: reef ]] 2025-05-13 22:52:32.838470 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-05-13 22:52:32.845104 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-05-13 22:52:32.852183 | orchestrator | + set -e 2025-05-13 22:52:32.852215 | orchestrator | + VERSION=2024.2 2025-05-13 22:52:32.853141 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-13 22:52:32.856914 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-05-13 22:52:32.856962 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-05-13 22:52:32.862042 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-13 22:52:32.863041 | orchestrator | ++ semver latest 7.0.0 2025-05-13 22:52:32.930322 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-13 22:52:32.930418 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-13 22:52:32.930435 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-13 22:52:32.930448 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-13 22:52:32.974115 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-13 22:52:32.975041 | orchestrator | + source /opt/venv/bin/activate 2025-05-13 22:52:32.976465 | orchestrator | ++ deactivate nondestructive 2025-05-13 22:52:32.976491 | orchestrator | ++ '[' -n '' ']' 2025-05-13 22:52:32.976502 | orchestrator | ++ '[' -n '' ']' 2025-05-13 22:52:32.976514 | orchestrator | ++ hash -r 2025-05-13 22:52:32.976531 | orchestrator | ++ '[' -n '' ']' 2025-05-13 22:52:32.976542 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-13 22:52:32.976553 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-13 22:52:32.976564 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-13 22:52:32.976576 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-13 22:52:32.976587 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-13 22:52:32.976717 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-13 22:52:32.976733 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-13 22:52:32.976746 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-13 22:52:32.976762 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-13 22:52:32.976773 | orchestrator | ++ export PATH 2025-05-13 22:52:32.976907 | orchestrator | ++ '[' -n '' ']' 2025-05-13 22:52:32.976960 | orchestrator | ++ '[' -z '' ']' 2025-05-13 22:52:32.976980 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-13 22:52:32.976998 | orchestrator | ++ PS1='(venv) ' 2025-05-13 22:52:32.977016 | orchestrator | ++ export PS1 2025-05-13 22:52:32.977033 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-13 22:52:32.977051 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-13 22:52:32.977075 | orchestrator | ++ hash -r 2025-05-13 22:52:32.977405 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-13 22:52:34.282997 | orchestrator | 2025-05-13 22:52:34.283128 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-13 22:52:34.283146 | orchestrator | 2025-05-13 22:52:34.283202 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-13 22:52:34.855822 | orchestrator | ok: [testbed-manager] 2025-05-13 22:52:34.855977 | orchestrator | 2025-05-13 22:52:34.855997 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-13 22:52:35.831377 | orchestrator | changed: [testbed-manager] 2025-05-13 22:52:35.831493 | orchestrator | 2025-05-13 22:52:35.831510 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-13 22:52:35.831523 | orchestrator | 2025-05-13 22:52:35.831535 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 22:52:38.287521 | orchestrator | ok: [testbed-manager] 2025-05-13 22:52:38.287662 | orchestrator | 2025-05-13 22:52:38.287693 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-13 22:52:44.133191 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-13 22:52:44.133308 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.7.2) 2025-05-13 22:52:44.133324 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:reef) 2025-05-13 22:52:44.133338 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest) 2025-05-13 22:52:44.133350 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.2) 2025-05-13 22:52:44.133362 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.3-alpine) 2025-05-13 22:52:44.133373 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.2.2) 2025-05-13 
22:52:44.133384 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest) 2025-05-13 22:52:44.133395 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest) 2025-05-13 22:52:44.133405 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.9-alpine) 2025-05-13 22:52:44.133416 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.4.0) 2025-05-13 22:52:44.133427 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.19.3) 2025-05-13 22:52:44.133439 | orchestrator | 2025-05-13 22:52:44.133479 | orchestrator | TASK [Check status] ************************************************************ 2025-05-13 22:54:00.273403 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-13 22:54:00.273529 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-13 22:54:00.273549 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-13 22:54:00.273562 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-13 22:54:00.273590 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j770929351214.1544', 'results_file': '/home/dragon/.ansible_async/j770929351214.1544', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273613 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j977685525606.1569', 'results_file': '/home/dragon/.ansible_async/j977685525606.1569', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.7.2', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273633 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-13 22:54:00.273647 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j961483632519.1594', 'results_file': '/home/dragon/.ansible_async/j961483632519.1594', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:reef', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273662 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j55082117021.1634', 'results_file': '/home/dragon/.ansible_async/j55082117021.1634', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273675 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-05-13 22:54:00.273698 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j384530175972.1659', 'results_file': '/home/dragon/.ansible_async/j384530175972.1659', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.2', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273712 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j902791509508.1691', 'results_file': '/home/dragon/.ansible_async/j902791509508.1691', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.3-alpine', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273724 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-13 22:54:00.273737 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j738097154792.1731', 'results_file': '/home/dragon/.ansible_async/j738097154792.1731', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.2.2', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273750 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j765970139389.1763', 'results_file': '/home/dragon/.ansible_async/j765970139389.1763', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273764 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j143384385865.1795', 'results_file': '/home/dragon/.ansible_async/j143384385865.1795', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273777 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j350947891941.1829', 'results_file': '/home/dragon/.ansible_async/j350947891941.1829', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.9-alpine', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273791 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j508606552533.1856', 'results_file': '/home/dragon/.ansible_async/j508606552533.1856', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.4.0', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273832 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j123539336589.1898', 'results_file': '/home/dragon/.ansible_async/j123539336589.1898', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.19.3', 'ansible_loop_var': 'item'}) 2025-05-13 22:54:00.273845 | orchestrator | 2025-05-13 22:54:00.273860 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-13 22:54:00.333670 | orchestrator | ok: [testbed-manager] 2025-05-13 22:54:00.333763 | orchestrator | 2025-05-13 22:54:00.333776 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-13 22:54:00.864674 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:00.864799 | orchestrator | 2025-05-13 22:54:00.864816 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-13 22:54:01.201076 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:01.201177 | orchestrator 
| 2025-05-13 22:54:01.201256 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-13 22:54:01.543669 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:01.543769 | orchestrator | 2025-05-13 22:54:01.543785 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-13 22:54:01.607480 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:54:01.607568 | orchestrator | 2025-05-13 22:54:01.607580 | orchestrator | TASK [Check if /etc/OTC_region exists] ****************************************** 2025-05-13 22:54:02.054817 | orchestrator | ok: [testbed-manager] 2025-05-13 22:54:02.054919 | orchestrator | 2025-05-13 22:54:02.054934 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-13 22:54:02.160286 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:54:02.160437 | orchestrator | 2025-05-13 22:54:02.160464 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-13 22:54:02.160480 | orchestrator | 2025-05-13 22:54:02.160491 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 22:54:03.845335 | orchestrator | ok: [testbed-manager] 2025-05-13 22:54:03.845441 | orchestrator | 2025-05-13 22:54:03.845456 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-13 22:54:03.949030 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-13 22:54:03.949153 | orchestrator | 2025-05-13 22:54:03.949179 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-13 22:54:04.004351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-13 22:54:04.004457 | orchestrator | 2025-05-13 22:54:04.004474 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-13 22:54:05.007866 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-13 22:54:05.007964 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-13 22:54:05.007976 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-13 22:54:05.007991 | orchestrator | 2025-05-13 22:54:05.008002 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-13 22:54:06.609905 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-13 22:54:06.610124 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-13 22:54:06.610157 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-13 22:54:06.610177 | orchestrator | 2025-05-13 22:54:06.610224 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-13 22:54:07.213787 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 22:54:07.213891 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:07.213907 | orchestrator | 2025-05-13 22:54:07.213920 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-13 22:54:07.855971 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 22:54:07.856078 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:07.856119 | orchestrator |
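[Editor's note] An aside on the 'Pull images' / 'Check status' pair at the start of this play: the item dicts with ansible_job_id and results_file fields are the signature of Ansible's fire-and-forget async pattern: every pull is started in the background with poll: 0, and a follow-up task polls the recorded job IDs until each pull finishes. A minimal sketch of that pattern (the module choice, timeout, and shortened image list are assumptions; only the 120-retry budget matches the log):

---
- name: Pre-pull container images concurrently (illustrative sketch)
  hosts: testbed-manager
  become: true
  tasks:
    - name: Pull images (fire and forget)
      community.docker.docker_image:
        name: "{{ item }}"
        source: pull
      loop:
        - registry.osism.tech/osism/osism-ansible:latest
        - registry.osism.tech/dockerhub/library/traefik:v3.4.0
      async: 1800          # allow up to 30 minutes per pull
      poll: 0              # do not wait; just record a job id
      register: pulls

    - name: Check status of every pull until it has finished
      ansible.builtin.async_status:
        jid: "{{ item.ansible_job_id }}"
      loop: "{{ pulls.results }}"
      register: job
      until: job.finished
      retries: 120         # the same retry budget seen in the log
      delay: 10

Because the pulls run concurrently, the wall-clock cost is roughly that of the largest image rather than the sum of all of them, which is why the log shows a few early FAILED - RETRYING probes followed by a burst of completed items.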
2025-05-13 22:54:07.856132 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-13 22:54:07.910779 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:54:07.910871 | orchestrator | 2025-05-13 22:54:07.910885 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-13 22:54:08.279832 | orchestrator | ok: [testbed-manager] 2025-05-13 22:54:08.279953 | orchestrator | 2025-05-13 22:54:08.279978 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-13 22:54:08.341070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-13 22:54:08.341168 | orchestrator | 2025-05-13 22:54:08.341184 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-13 22:54:09.427486 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:09.427591 | orchestrator | 2025-05-13 22:54:09.427608 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-13 22:54:10.440597 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:10.440703 | orchestrator | 2025-05-13 22:54:10.440720 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-13 22:54:13.789623 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:13.789735 | orchestrator | 2025-05-13 22:54:13.789752 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-13 22:54:13.922991 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-13 22:54:13.923087 | orchestrator | 2025-05-13 22:54:13.923102 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-13 22:54:14.010109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-13 22:54:14.010204 | orchestrator | 2025-05-13 22:54:14.010219 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-13 22:54:16.826809 | orchestrator | ok: [testbed-manager] 2025-05-13 22:54:16.826915 | orchestrator | 2025-05-13 22:54:16.826932 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-13 22:54:16.946625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-13 22:54:16.946724 | orchestrator | 2025-05-13 22:54:16.946739 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-13 22:54:18.053453 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-13 22:54:18.053567 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-13 22:54:18.053583 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-13 22:54:18.053594 | orchestrator | 2025-05-13 22:54:18.053639 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-13 22:54:18.133868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-13 22:54:18.133969 | orchestrator | 
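[Editor's note] The traefik service tasks just above (create the shared external network, copy docker-compose.yml, manage the service) are the compose-based service pattern these roles use throughout. A minimal sketch (whether the role drives compose directly or through a systemd unit, as the netbox role below does, is not visible in this log; docker_compose_v2 and the template name are assumptions):

---
- name: Manage a compose-based service (illustrative sketch)
  hosts: testbed-manager
  become: true
  tasks:
    - name: Create traefik external network
      community.docker.docker_network:
        name: traefik

    - name: Copy docker-compose.yml file
      ansible.builtin.template:
        src: docker-compose.yml.j2        # hypothetical template name
        dest: /opt/traefik/docker-compose.yml
        mode: "0644"

    - name: Manage traefik service
      community.docker.docker_compose_v2:
        project_src: /opt/traefik
        state: present                    # brings the stack up idempotently

The external network is what lets the netbox containers configured below attach to the same traefik-fronted network without the two compose projects knowing about each other.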
2025-05-13 22:54:18.133985 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-13 22:54:18.752550 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-13 22:54:18.752641 | orchestrator | 2025-05-13 22:54:18.752657 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-13 22:54:19.405681 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:19.405789 | orchestrator | 2025-05-13 22:54:19.405805 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-13 22:54:20.032653 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 22:54:20.032757 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:20.032771 | orchestrator | 2025-05-13 22:54:20.032782 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-13 22:54:20.467756 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:20.467861 | orchestrator | 2025-05-13 22:54:20.467878 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-13 22:54:20.839311 | orchestrator | ok: [testbed-manager] 2025-05-13 22:54:20.839414 | orchestrator | 2025-05-13 22:54:20.839441 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-13 22:54:20.890397 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:54:20.890504 | orchestrator | 2025-05-13 22:54:20.890520 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-13 22:54:21.535464 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:21.535594 | orchestrator | 2025-05-13 22:54:21.535612 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-13 22:54:21.604156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-13 22:54:21.604266 | orchestrator | 2025-05-13 22:54:21.604282 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-13 22:54:22.388137 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-13 22:54:22.388263 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-13 22:54:22.388278 | orchestrator | 2025-05-13 22:54:22.388311 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-13 22:54:23.070117 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-13 22:54:23.070250 | orchestrator | 2025-05-13 22:54:23.070270 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-05-13 22:54:23.769540 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:23.769693 | orchestrator | 2025-05-13 22:54:23.769710 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-13 22:54:23.810561 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:54:23.810677 | orchestrator | 2025-05-13 22:54:23.810696 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-13 22:54:24.472764 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:24.472875 | orchestrator | 2025-05-13 22:54:24.472892 | 
orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-13 22:54:26.383938 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 22:54:26.384039 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 22:54:26.384052 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 22:54:26.384062 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:26.384073 | orchestrator | 2025-05-13 22:54:26.384084 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-13 22:54:32.461112 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-13 22:54:32.461205 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-13 22:54:32.461216 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-13 22:54:32.461224 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-13 22:54:32.461231 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-13 22:54:32.461238 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-13 22:54:32.461246 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-13 22:54:32.461252 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-13 22:54:32.461260 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-13 22:54:32.461267 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-13 22:54:32.461274 | orchestrator | 2025-05-13 22:54:32.461281 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-13 22:54:33.110345 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-13 22:54:33.110480 | orchestrator | 2025-05-13 22:54:33.110498 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-13 22:54:33.187007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-13 22:54:33.187104 | orchestrator | 2025-05-13 22:54:33.187116 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-13 22:54:33.904724 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:33.904825 | orchestrator | 2025-05-13 22:54:33.904839 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-13 22:54:34.529151 | orchestrator | ok: [testbed-manager] 2025-05-13 22:54:34.529274 | orchestrator | 2025-05-13 22:54:34.529291 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-13 22:54:35.268934 | orchestrator | changed: [testbed-manager] 2025-05-13 22:54:35.269043 | orchestrator | 2025-05-13 22:54:35.269063 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-13 22:54:37.629543 | orchestrator | ok: [testbed-manager] 2025-05-13 22:54:37.629654 | orchestrator | 2025-05-13 22:54:37.629669 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-13 22:54:38.620777 | orchestrator | ok: [testbed-manager] 2025-05-13 22:54:38.620899 | orchestrator | 2025-05-13 22:54:38.620928 | orchestrator | TASK [osism.services.netbox : Manage netbox service] 
*************************** 2025-05-13 22:55:00.848709 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-13 22:55:00.848857 | orchestrator | ok: [testbed-manager] 2025-05-13 22:55:00.848881 | orchestrator | 2025-05-13 22:55:00.848901 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-13 22:55:00.911273 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:55:00.911377 | orchestrator | 2025-05-13 22:55:00.911394 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-13 22:55:00.911407 | orchestrator | 2025-05-13 22:55:00.911419 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-13 22:55:00.962153 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:55:00.962244 | orchestrator | 2025-05-13 22:55:00.962258 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-13 22:55:01.048198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-13 22:55:01.048272 | orchestrator | 2025-05-13 22:55:01.048280 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get info on postgres container] ****** 2025-05-13 22:55:01.943111 | orchestrator | ok: [testbed-manager] 2025-05-13 22:55:01.943222 | orchestrator | 2025-05-13 22:55:01.943240 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-13 22:55:02.019231 | orchestrator | ok: [testbed-manager] 2025-05-13 22:55:02.019328 | orchestrator | 2025-05-13 22:55:02.019343 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-13 22:55:02.067602 | orchestrator | ok: [testbed-manager] => { 2025-05-13 22:55:02.067684 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-13 22:55:02.067697 | orchestrator | } 2025-05-13 22:55:02.067709 | orchestrator | 2025-05-13 22:55:02.067721 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-13 22:55:02.709197 | orchestrator | ok: [testbed-manager] 2025-05-13 22:55:02.709299 | orchestrator | 2025-05-13 22:55:02.709315 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get info on postgres image] ********** 2025-05-13 22:55:03.699610 | orchestrator | ok: [testbed-manager] 2025-05-13 22:55:03.699714 | orchestrator | 2025-05-13 22:55:03.699744 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-13 22:55:03.773244 | orchestrator | ok: [testbed-manager] 2025-05-13 22:55:03.773379 | orchestrator | 2025-05-13 22:55:03.773398 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-13 22:55:03.829914 | orchestrator | ok: [testbed-manager] => { 2025-05-13 22:55:03.830112 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-13 22:55:03.830154 | orchestrator | } 2025-05-13 22:55:03.830178 | orchestrator | 2025-05-13 22:55:03.830198 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-13 22:55:03.900051 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:55:03.900150 | orchestrator | 2025-05-13 22:55:03.900165 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-05-13 22:55:03.965356 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:55:03.965449 | orchestrator | 2025-05-13 22:55:03.965461 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get info on postgres volume] ********* 2025-05-13 22:55:04.040232 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:55:04.040332 | orchestrator | 2025-05-13 22:55:04.040347 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-13 22:55:04.209061 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:55:04.209166 | orchestrator | 2025-05-13 22:55:04.209181 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-13 22:55:04.262581 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:55:04.262717 | orchestrator | 2025-05-13 22:55:04.262739 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-13 22:55:04.323137 | orchestrator | skipping: [testbed-manager] 2025-05-13 22:55:04.323239 | orchestrator | 2025-05-13 22:55:04.323255 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-13 22:55:05.640487 | orchestrator | changed: [testbed-manager] 2025-05-13 22:55:05.640640 | orchestrator | 2025-05-13 22:55:05.640657 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-13 22:55:05.718878 | orchestrator | ok: [testbed-manager] 2025-05-13 22:55:05.718973 | orchestrator | 2025-05-13 22:55:05.718988 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-13 22:56:05.773807 | orchestrator | Pausing for 60 seconds 2025-05-13 22:56:05.773929 | orchestrator | changed: [testbed-manager] 2025-05-13 22:56:05.773946 | orchestrator | 2025-05-13 22:56:05.773960 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for a healthy netbox service] *** 2025-05-13 22:56:05.839414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-13 22:56:05.839517 | orchestrator | 2025-05-13 22:56:05.839534 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-13 23:00:07.456418 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-05-13 23:00:07.456534 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-13 23:00:07.456546 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-05-13 23:00:07.456553 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-13 23:00:07.456561 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-13 23:00:07.456568 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-13 23:00:07.456574 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-05-13 23:00:07.456582 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-13 23:00:07.456588 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-13 23:00:07.456595 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-13 23:00:07.456603 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-13 23:00:07.456660 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-13 23:00:07.456667 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-13 23:00:07.456675 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-05-13 23:00:07.456682 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-13 23:00:07.456690 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-13 23:00:07.456698 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-13 23:00:07.456720 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-13 23:00:07.456727 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-13 23:00:07.456734 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-13 23:00:07.456760 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-05-13 23:00:07.456768 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-05-13 23:00:07.456775 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 
2025-05-13 23:00:07.456791 | orchestrator |
2025-05-13 23:00:07.456798 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-05-13 23:00:07.456806 | orchestrator |
2025-05-13 23:00:07.456813 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-13 23:00:09.680501 | orchestrator | ok: [testbed-manager]
2025-05-13 23:00:09.680674 | orchestrator |
2025-05-13 23:00:09.680695 | orchestrator | TASK [Apply manager role] ******************************************************
2025-05-13 23:00:09.818748 | orchestrator | included: osism.services.manager for testbed-manager
2025-05-13 23:00:09.818858 | orchestrator |
2025-05-13 23:00:09.818875 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-05-13 23:00:09.892767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-05-13 23:00:09.892866 | orchestrator |
2025-05-13 23:00:09.892880 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-05-13 23:00:11.835267 | orchestrator | ok: [testbed-manager]
2025-05-13 23:00:11.835372 | orchestrator |
2025-05-13 23:00:11.835388 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-05-13 23:00:11.895880 | orchestrator | ok: [testbed-manager]
2025-05-13 23:00:11.895982 | orchestrator |
2025-05-13 23:00:11.895997 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-05-13 23:00:11.996145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-05-13 23:00:11.996278 | orchestrator |
2025-05-13 23:00:11.996295 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-05-13 23:00:14.920340 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-05-13 23:00:14.920454 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-05-13 23:00:14.920469 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-05-13 23:00:14.920481 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-05-13 23:00:14.920493 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-05-13 23:00:14.920509 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-05-13 23:00:14.920520 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-05-13 23:00:14.920532 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-05-13 23:00:14.920543 | orchestrator |
2025-05-13 23:00:14.920556 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-05-13 23:00:15.632038 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:15.632174 | orchestrator |
2025-05-13 23:00:15.632193 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-05-13 23:00:15.726513 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-05-13 23:00:15.726678 | orchestrator |
2025-05-13 23:00:15.726696 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-05-13 23:00:16.981489 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-05-13 23:00:16.981588 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-05-13 23:00:16.981645 | orchestrator |
2025-05-13 23:00:16.981660 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-05-13 23:00:17.694737 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:17.694842 | orchestrator |
2025-05-13 23:00:17.694861 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-05-13 23:00:17.753375 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:00:17.753466 | orchestrator |
2025-05-13 23:00:17.753511 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-05-13 23:00:17.816895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-05-13 23:00:17.816985 | orchestrator |
2025-05-13 23:00:17.817000 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-05-13 23:00:19.121118 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-13 23:00:19.121231 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-13 23:00:19.121248 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:19.121261 | orchestrator |
2025-05-13 23:00:19.121273 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-05-13 23:00:19.720141 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:19.720274 | orchestrator |
2025-05-13 23:00:19.720293 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-05-13 23:00:19.803590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-05-13 23:00:19.803705 | orchestrator |
2025-05-13 23:00:19.803719 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-05-13 23:00:20.893287 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-13 23:00:20.893400 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-13 23:00:20.893416 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:20.893429 | orchestrator |
2025-05-13 23:00:20.893442 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-05-13 23:00:21.524336 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:21.524443 | orchestrator |
2025-05-13 23:00:21.524459 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-05-13 23:00:21.621897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-05-13 23:00:21.621998 | orchestrator |
2025-05-13 23:00:21.622013 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-05-13 23:00:22.175272 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:22.175376 | orchestrator |
2025-05-13 23:00:22.175392 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-05-13 23:00:22.579132 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:22.579258 | orchestrator |
2025-05-13 23:00:22.579276 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-05-13 23:00:23.831711 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-05-13 23:00:23.831817 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-05-13 23:00:23.831833 | orchestrator |
2025-05-13 23:00:23.831847 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-05-13 23:00:24.468196 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:24.468331 | orchestrator |
2025-05-13 23:00:24.468348 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-05-13 23:00:24.867766 | orchestrator | ok: [testbed-manager]
2025-05-13 23:00:24.867875 | orchestrator |
2025-05-13 23:00:24.867891 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-05-13 23:00:25.258488 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:25.258589 | orchestrator |
2025-05-13 23:00:25.258671 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-05-13 23:00:25.311590 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:00:25.311714 | orchestrator |
2025-05-13 23:00:25.311734 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-05-13 23:00:25.405241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-05-13 23:00:25.405353 | orchestrator |
2025-05-13 23:00:25.405378 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-05-13 23:00:25.475734 | orchestrator | ok: [testbed-manager]
2025-05-13 23:00:25.475830 | orchestrator |
2025-05-13 23:00:25.475846 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-05-13 23:00:27.495178 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-05-13 23:00:27.495294 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-05-13 23:00:27.495303 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-05-13 23:00:27.495308 | orchestrator |
2025-05-13 23:00:27.495314 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-05-13 23:00:28.349418 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:28.349523 | orchestrator |
2025-05-13 23:00:28.349540 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-05-13 23:00:29.075278 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:29.075393 | orchestrator |
2025-05-13 23:00:29.075410 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-05-13 23:00:29.794318 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:29.794429 | orchestrator |
2025-05-13 23:00:29.794447 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-05-13 23:00:29.882053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-05-13 23:00:29.882132 | orchestrator |
2025-05-13 23:00:29.882138 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-05-13 23:00:29.940915 | orchestrator | ok: [testbed-manager]
2025-05-13 23:00:29.941021 | orchestrator |
2025-05-13 23:00:29.941036 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-05-13 23:00:30.662551 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-05-13 23:00:30.662679 | orchestrator |
2025-05-13 23:00:30.662716 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-05-13 23:00:30.759549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-05-13 23:00:30.759712 | orchestrator |
2025-05-13 23:00:30.759727 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-05-13 23:00:31.509098 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:31.509203 | orchestrator |
2025-05-13 23:00:31.509219 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-05-13 23:00:32.153350 | orchestrator | ok: [testbed-manager]
2025-05-13 23:00:32.153425 | orchestrator |
2025-05-13 23:00:32.153431 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-05-13 23:00:32.217112 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:00:32.217193 | orchestrator |
2025-05-13 23:00:32.217204 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-05-13 23:00:32.282167 | orchestrator | ok: [testbed-manager]
2025-05-13 23:00:32.282263 | orchestrator |
2025-05-13 23:00:32.282278 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-05-13 23:00:33.136708 | orchestrator | changed: [testbed-manager]
2025-05-13 23:00:33.136817 | orchestrator |
2025-05-13 23:00:33.136833 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-05-13 23:01:16.883409 | orchestrator | changed: [testbed-manager]
2025-05-13 23:01:16.883526 | orchestrator |
2025-05-13 23:01:16.883543 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-05-13 23:01:17.606845 | orchestrator | ok: [testbed-manager]
2025-05-13 23:01:17.606949 | orchestrator |
2025-05-13 23:01:17.606965 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-05-13 23:01:20.507340 | orchestrator | changed: [testbed-manager]
2025-05-13 23:01:20.507438 | orchestrator |
2025-05-13 23:01:20.507451 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-05-13 23:01:20.572802 | orchestrator | ok: [testbed-manager]
2025-05-13 23:01:20.572938 | orchestrator |
2025-05-13 23:01:20.572965 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-13 23:01:20.572986 | orchestrator |
2025-05-13 23:01:20.573005 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-05-13 23:01:20.630272 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:01:20.630367 | orchestrator |
2025-05-13 23:01:20.630381 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-05-13 23:02:20.678076 | orchestrator | Pausing for 60 seconds
2025-05-13 23:02:20.678190 | orchestrator | changed: [testbed-manager]
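The Manage manager service task above brings up the freshly installed systemd unit, replacing the old docker-compose@manager template instance that was just stopped and disabled. A hypothetical spot check on the manager node; the unit name manager.service and the compose file location under /opt/manager are inferred from the task names and the created directories, not confirmed by the role:

    # Both names below are assumptions inferred from the task names.
    systemctl status manager.service --no-pager
    docker compose -f /opt/manager/docker-compose.yml ps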
2025-05-13 23:02:20.678198 | orchestrator |
2025-05-13 23:02:20.678204 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-05-13 23:02:25.641237 | orchestrator | changed: [testbed-manager]
2025-05-13 23:02:25.641353 | orchestrator |
2025-05-13 23:02:25.641369 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-05-13 23:03:07.254193 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-05-13 23:03:07.254338 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-05-13 23:03:07.254360 | orchestrator | changed: [testbed-manager]
2025-05-13 23:03:07.254374 | orchestrator |
2025-05-13 23:03:07.254386 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-05-13 23:03:16.714257 | orchestrator | changed: [testbed-manager]
2025-05-13 23:03:16.714395 | orchestrator |
2025-05-13 23:03:16.714422 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-05-13 23:03:16.813916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-05-13 23:03:16.814074 | orchestrator |
2025-05-13 23:03:16.814093 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-13 23:03:16.814106 | orchestrator |
2025-05-13 23:03:16.814117 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-05-13 23:03:16.881077 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:03:16.881176 | orchestrator |
2025-05-13 23:03:16.881192 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:03:16.881205 | orchestrator | testbed-manager : ok=109 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025-05-13 23:03:16.881217 | orchestrator |
2025-05-13 23:03:17.019209 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-13 23:03:17.019321 | orchestrator | + deactivate
2025-05-13 23:03:17.019338 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-13 23:03:17.019352 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-13 23:03:17.019363 | orchestrator | + export PATH
2025-05-13 23:03:17.019374 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-13 23:03:17.019386 | orchestrator | + '[' -n '' ']'
2025-05-13 23:03:17.019397 | orchestrator | + hash -r
2025-05-13 23:03:17.019408 | orchestrator | + '[' -n '' ']'
2025-05-13 23:03:17.019419 | orchestrator | + unset VIRTUAL_ENV
2025-05-13 23:03:17.019430 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-13 23:03:17.019441 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-13 23:03:17.019502 | orchestrator | + unset -f deactivate
2025-05-13 23:03:17.019516 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-05-13 23:03:17.026132 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-13 23:03:17.026188 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-13 23:03:17.026213 | orchestrator | + local max_attempts=60
2025-05-13 23:03:17.026235 | orchestrator | + local name=ceph-ansible
2025-05-13 23:03:17.026257 | orchestrator | + local attempt_num=1
2025-05-13 23:03:17.027101 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-13 23:03:17.066362 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-13 23:03:17.066496 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-13 23:03:17.066514 | orchestrator | + local max_attempts=60
2025-05-13 23:03:17.066526 | orchestrator | + local name=kolla-ansible
2025-05-13 23:03:17.066537 | orchestrator | + local attempt_num=1
2025-05-13 23:03:17.066810 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-13 23:03:17.100856 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-13 23:03:17.100939 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-13 23:03:17.100948 | orchestrator | + local max_attempts=60
2025-05-13 23:03:17.100977 | orchestrator | + local name=osism-ansible
2025-05-13 23:03:17.100985 | orchestrator | + local attempt_num=1
2025-05-13 23:03:17.101563 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-13 23:03:17.135626 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-13 23:03:17.135703 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-13 23:03:17.135741 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-13 23:03:17.873587 | orchestrator | ++ semver latest 9.0.0
2025-05-13 23:03:17.934117 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-13 23:03:17.934212 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-13 23:03:17.934228 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1
2025-05-13 23:03:17.934242 | orchestrator | + local max_attempts=60
2025-05-13 23:03:17.934253 | orchestrator | + local name=netbox-netbox-1
2025-05-13 23:03:17.934264 | orchestrator | + local attempt_num=1
2025-05-13 23:03:17.934627 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1
2025-05-13 23:03:17.981073 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-13 23:03:17.981159 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh
2025-05-13 23:03:17.988599 | orchestrator | + set -e
2025-05-13 23:03:17.988637 | orchestrator | + osism manage netbox --parallel 4
2025-05-13 23:03:19.949270 | orchestrator | 2025-05-13 23:03:19 | INFO  | It takes a moment until task 68749700-7a5d-4224-8b6d-48cd0cb70ed7 (netbox-manager) has been started and output is visible here.
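The + and ++ lines are bash xtrace output from the testbed's bootstrap scripting. From the visible locals (max_attempts, name, attempt_num) and the docker inspect probe, a plausible reconstruction of wait_for_container_healthy looks like the sketch below; the retry interval and the failure branch are assumptions, since every call in the trace succeeds on its first probe:

    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1
        # Poll Docker's health status until the container reports healthy.
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "container $name did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$((attempt_num + 1))
            sleep 10  # interval is an assumption; the trace never reaches this branch
        done
    }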
2025-05-13 23:03:22.475490 | orchestrator | 2025-05-13 23:03:22 | INFO  | Wait for NetBox service
2025-05-13 23:03:24.533926 | orchestrator |
2025-05-13 23:03:24.534655 | orchestrator | PLAY [Wait for NetBox service] *************************************************
2025-05-13 23:03:24.609081 | orchestrator |
2025-05-13 23:03:24.609436 | orchestrator | TASK [Wait for NetBox service REST API] ****************************************
2025-05-13 23:03:25.736193 | orchestrator | ok: [localhost]
2025-05-13 23:03:25.736543 | orchestrator |
2025-05-13 23:03:25.737131 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:03:25.737997 | orchestrator | 2025-05-13 23:03:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:03:25.738725 | orchestrator | 2025-05-13 23:03:25 | INFO  | Please wait and do not abort execution.
2025-05-13 23:03:25.738943 | orchestrator | localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:03:26.432666 | orchestrator | 2025-05-13 23:03:26 | INFO  | Manage devicetypes
2025-05-13 23:03:29.680173 | orchestrator | 2025-05-13 23:03:29 | INFO  | Manage moduletypes
2025-05-13 23:03:29.877834 | orchestrator | 2025-05-13 23:03:29 | INFO  | Manage resources
2025-05-13 23:03:29.891301 | orchestrator | 2025-05-13 23:03:29 | INFO  | Handle file /netbox/resources/100-initialise.yml
2025-05-13 23:03:30.920317 | orchestrator | IGNORE_SSL_ERRORS is True, catching exception and disabling SSL verification.
2025-05-13 23:03:30.929995 | orchestrator | Manufacturer queued for addition: Arista
2025-05-13 23:03:30.934231 | orchestrator | Manufacturer queued for addition: Other
2025-05-13 23:03:30.935172 | orchestrator | Manufacturer Created: Arista - 2
2025-05-13 23:03:30.936067 | orchestrator | Manufacturer Created: Other - 3
2025-05-13 23:03:30.937247 | orchestrator | Device Type Created: Arista - DCS-7050TX3-48C8 - 2
2025-05-13 23:03:30.938246 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 2 - 1
2025-05-13 23:03:30.939399 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 2 - 2
2025-05-13 23:03:30.940121 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 2 - 3
2025-05-13 23:03:30.940854 | orchestrator | Interface Template Created: Ethernet4 - 10GBASE-T (10GE) - 2 - 4
2025-05-13 23:03:30.941869 | orchestrator | Interface Template Created: Ethernet5 - 10GBASE-T (10GE) - 2 - 5
2025-05-13 23:03:30.943227 | orchestrator | Interface Template Created: Ethernet6 - 10GBASE-T (10GE) - 2 - 6
2025-05-13 23:03:30.943257 | orchestrator | Interface Template Created: Ethernet7 - 10GBASE-T (10GE) - 2 - 7
2025-05-13 23:03:30.943998 | orchestrator | Interface Template Created: Ethernet8 - 10GBASE-T (10GE) - 2 - 8
2025-05-13 23:03:30.944548 | orchestrator | Interface Template Created: Ethernet9 - 10GBASE-T (10GE) - 2 - 9
2025-05-13 23:03:30.945403 | orchestrator | Interface Template Created: Ethernet10 - 10GBASE-T (10GE) - 2 - 10
2025-05-13 23:03:30.945800 | orchestrator | Interface Template Created: Ethernet11 - 10GBASE-T (10GE) - 2 - 11
2025-05-13 23:03:30.946742 | orchestrator | Interface Template Created: Ethernet12 - 10GBASE-T (10GE) - 2 - 12
2025-05-13 23:03:30.947364 | orchestrator | Interface Template Created: Ethernet13 - 10GBASE-T (10GE) - 2 - 13
2025-05-13 23:03:30.947704 | orchestrator | Interface Template Created: Ethernet14 - 10GBASE-T (10GE) - 2 - 14
2025-05-13 23:03:30.948507 | orchestrator | Interface Template Created: Ethernet15 - 10GBASE-T (10GE) - 2 - 15
2025-05-13 23:03:30.949029 | orchestrator | Interface Template Created: Ethernet16 - 10GBASE-T (10GE) - 2 - 16
2025-05-13 23:03:30.949605 | orchestrator | Interface Template Created: Ethernet17 - 10GBASE-T (10GE) - 2 - 17
2025-05-13 23:03:30.949982 | orchestrator | Interface Template Created: Ethernet18 - 10GBASE-T (10GE) - 2 - 18
2025-05-13 23:03:30.950728 | orchestrator | Interface Template Created: Ethernet19 - 10GBASE-T (10GE) - 2 - 19
2025-05-13 23:03:30.951195 | orchestrator | Interface Template Created: Ethernet20 - 10GBASE-T (10GE) - 2 - 20
2025-05-13 23:03:30.951790 | orchestrator | Interface Template Created: Ethernet21 - 10GBASE-T (10GE) - 2 - 21
2025-05-13 23:03:30.952715 | orchestrator | Interface Template Created: Ethernet22 - 10GBASE-T (10GE) - 2 - 22
2025-05-13 23:03:30.953102 | orchestrator | Interface Template Created: Ethernet23 - 10GBASE-T (10GE) - 2 - 23
2025-05-13 23:03:30.954010 | orchestrator | Interface Template Created: Ethernet24 - 10GBASE-T (10GE) - 2 - 24
2025-05-13 23:03:30.954088 | orchestrator | Interface Template Created: Ethernet25 - 10GBASE-T (10GE) - 2 - 25
2025-05-13 23:03:30.954382 | orchestrator | Interface Template Created: Ethernet26 - 10GBASE-T (10GE) - 2 - 26
2025-05-13 23:03:30.955109 | orchestrator | Interface Template Created: Ethernet27 - 10GBASE-T (10GE) - 2 - 27
2025-05-13 23:03:30.955774 | orchestrator | Interface Template Created: Ethernet28 - 10GBASE-T (10GE) - 2 - 28
2025-05-13 23:03:30.955977 | orchestrator | Interface Template Created: Ethernet29 - 10GBASE-T (10GE) - 2 - 29
2025-05-13 23:03:30.956598 | orchestrator | Interface Template Created: Ethernet30 - 10GBASE-T (10GE) - 2 - 30
2025-05-13 23:03:30.956952 | orchestrator | Interface Template Created: Ethernet31 - 10GBASE-T (10GE) - 2 - 31
2025-05-13 23:03:30.957542 | orchestrator | Interface Template Created: Ethernet32 - 10GBASE-T (10GE) - 2 - 32
2025-05-13 23:03:30.958172 | orchestrator | Interface Template Created: Ethernet33 - 10GBASE-T (10GE) - 2 - 33
2025-05-13 23:03:30.958687 | orchestrator | Interface Template Created: Ethernet34 - 10GBASE-T (10GE) - 2 - 34
2025-05-13 23:03:30.959050 | orchestrator | Interface Template Created: Ethernet35 - 10GBASE-T (10GE) - 2 - 35
2025-05-13 23:03:30.959552 | orchestrator | Interface Template Created: Ethernet36 - 10GBASE-T (10GE) - 2 - 36
2025-05-13 23:03:30.960040 | orchestrator | Interface Template Created: Ethernet37 - 10GBASE-T (10GE) - 2 - 37
2025-05-13 23:03:30.960947 | orchestrator | Interface Template Created: Ethernet38 - 10GBASE-T (10GE) - 2 - 38
2025-05-13 23:03:30.961167 | orchestrator | Interface Template Created: Ethernet39 - 10GBASE-T (10GE) - 2 - 39
2025-05-13 23:03:30.961610 | orchestrator | Interface Template Created: Ethernet40 - 10GBASE-T (10GE) - 2 - 40
2025-05-13 23:03:30.962132 | orchestrator | Interface Template Created: Ethernet41 - 10GBASE-T (10GE) - 2 - 41
2025-05-13 23:03:30.962682 | orchestrator | Interface Template Created: Ethernet42 - 10GBASE-T (10GE) - 2 - 42
2025-05-13 23:03:30.962705 | orchestrator | Interface Template Created: Ethernet43 - 10GBASE-T (10GE) - 2 - 43
2025-05-13 23:03:30.963119 | orchestrator | Interface Template Created: Ethernet44 - 10GBASE-T (10GE) - 2 - 44
2025-05-13 23:03:30.963548 | orchestrator | Interface Template Created: Ethernet45 - 10GBASE-T (10GE) - 2 - 45
2025-05-13 23:03:30.963764 | orchestrator | Interface Template Created: Ethernet46 - 10GBASE-T (10GE) - 2 - 46
2025-05-13 23:03:30.964311 | orchestrator | Interface Template Created: Ethernet47 - 10GBASE-T (10GE) - 2 - 47
2025-05-13 23:03:30.964726 | orchestrator | Interface Template Created: Ethernet48 - 10GBASE-T (10GE) - 2 - 48
2025-05-13 23:03:30.964987 | orchestrator | Interface Template Created: Ethernet49/1 - QSFP28 (100GE) - 2 - 49
2025-05-13 23:03:30.965486 | orchestrator | Interface Template Created: Ethernet50/1 - QSFP28 (100GE) - 2 - 50
2025-05-13 23:03:30.965937 | orchestrator | Interface Template Created: Ethernet51/1 - QSFP28 (100GE) - 2 - 51
2025-05-13 23:03:30.966264 | orchestrator | Interface Template Created: Ethernet52/1 - QSFP28 (100GE) - 2 - 52
2025-05-13 23:03:30.966698 | orchestrator | Interface Template Created: Ethernet53/1 - QSFP28 (100GE) - 2 - 53
2025-05-13 23:03:30.967069 | orchestrator | Interface Template Created: Ethernet54/1 - QSFP28 (100GE) - 2 - 54
2025-05-13 23:03:30.968096 | orchestrator | Interface Template Created: Ethernet55/1 - QSFP28 (100GE) - 2 - 55
2025-05-13 23:03:30.968272 | orchestrator | Interface Template Created: Ethernet56/1 - QSFP28 (100GE) - 2 - 56
2025-05-13 23:03:30.968705 | orchestrator | Interface Template Created: Management1 - 1000BASE-T (1GE) - 2 - 57
2025-05-13 23:03:30.968878 | orchestrator | Power Port Template Created: PS1 - C14 - 2 - 1
2025-05-13 23:03:30.969332 | orchestrator | Power Port Template Created: PS2 - C14 - 2 - 2
2025-05-13 23:03:30.969641 | orchestrator | Console Port Template Created: Console - RJ-45 - 2 - 1
2025-05-13 23:03:30.970130 | orchestrator | Device Type Created: Other - Baremetal-Device - 3
2025-05-13 23:03:30.970534 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 3 - 58
2025-05-13 23:03:30.970924 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 3 - 59
2025-05-13 23:03:30.971265 | orchestrator | Power Port Template Created: PS1 - C14 - 3 - 3
2025-05-13 23:03:30.971825 | orchestrator | Device Type Created: Other - Manager - 4
2025-05-13 23:03:30.971975 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 4 - 60
2025-05-13 23:03:30.972401 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 4 - 61
2025-05-13 23:03:30.972985 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 4 - 62
2025-05-13 23:03:30.973299 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 4 - 63
2025-05-13 23:03:30.973421 | orchestrator | Power Port Template Created: PS1 - C14 - 4 - 4
2025-05-13 23:03:30.973736 | orchestrator | Device Type Created: Other - Node - 5
2025-05-13 23:03:30.974174 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 5 - 64
2025-05-13 23:03:30.974536 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 5 - 65
2025-05-13 23:03:30.974781 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 5 - 66
2025-05-13 23:03:30.976738 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 5 - 67
2025-05-13 23:03:30.976794 | orchestrator | Power Port Template Created: PS1 - C14 - 5 - 5
2025-05-13 23:03:30.976815 | orchestrator | Device Type Created: Other - Baremetal-Housing - 6
2025-05-13 23:03:30.976836 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 6 - 68
2025-05-13 23:03:30.976855 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 6 - 69
2025-05-13 23:03:30.976874 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 6 - 70
2025-05-13 23:03:30.976886 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 6 - 71
2025-05-13 23:03:30.977022 | orchestrator | Power Port Template Created: PS1 - C14 - 6 - 6
2025-05-13 23:03:30.977039 | orchestrator | Manufacturer queued for addition: .gitkeep
2025-05-13 23:03:30.977501 | orchestrator | Manufacturer Created: .gitkeep - 4
2025-05-13 23:03:30.977703 | orchestrator |
2025-05-13 23:03:30.977966 | orchestrator | PLAY [Manage NetBox resources defined in 100-initialise.yml] *******************
2025-05-13 23:03:30.978297 | orchestrator |
2025-05-13 23:03:30.978504 | orchestrator | TASK [Manage NetBox resource Testbed of type tenant] ***************************
2025-05-13 23:03:32.185930 | orchestrator | changed: [localhost]
2025-05-13 23:03:32.186120 | orchestrator |
2025-05-13 23:03:32.186150 | orchestrator | TASK [Manage NetBox resource Discworld of type site] ***************************
2025-05-13 23:03:33.448358 | orchestrator | changed: [localhost]
2025-05-13 23:03:33.453205 | orchestrator |
2025-05-13 23:03:33.453727 | orchestrator | TASK [Manage NetBox resource Ankh-Morpork of type location] ********************
2025-05-13 23:03:34.738294 | orchestrator | changed: [localhost]
2025-05-13 23:03:34.741476 | orchestrator |
2025-05-13 23:03:34.742331 | orchestrator | TASK [Manage NetBox resource OOB Testbed of type vlan] *************************
2025-05-13 23:03:36.168228 | orchestrator | changed: [localhost]
2025-05-13 23:03:36.173029 | orchestrator |
2025-05-13 23:03:36.173074 | orchestrator | TASK [Manage NetBox resource of type prefix] ***********************************
2025-05-13 23:03:37.839355 | orchestrator | changed: [localhost]
2025-05-13 23:03:37.844036 | orchestrator |
2025-05-13 23:03:37.845067 | orchestrator | TASK [Manage NetBox resource of type prefix] ***********************************
2025-05-13 23:03:39.108761 | orchestrator | changed: [localhost]
2025-05-13 23:03:39.108876 | orchestrator |
2025-05-13 23:03:39.108893 | orchestrator | TASK [Manage NetBox resource of type prefix] ***********************************
2025-05-13 23:03:40.392669 | orchestrator | changed: [localhost]
2025-05-13 23:03:40.396639 | orchestrator |
2025-05-13 23:03:40.399638 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:03:41.684005 | orchestrator | changed: [localhost]
2025-05-13 23:03:41.684553 | orchestrator |
2025-05-13 23:03:41.684964 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:03:42.814981 | orchestrator | changed: [localhost]
2025-05-13 23:03:42.815711 | orchestrator |
2025-05-13 23:03:42.816125 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:03:42.816704 | orchestrator | 2025-05-13 23:03:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:03:42.817064 | orchestrator | 2025-05-13 23:03:42 | INFO  | Please wait and do not abort execution.
2025-05-13 23:03:42.817785 | orchestrator | localhost : ok=9 changed=9 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
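netbox-manager applies these YAML resource definitions through the NetBox REST API (with SSL verification disabled, per the IGNORE_SSL_ERRORS line above). A hypothetical way to double-check the outcome by hand; NETBOX_HOST and NETBOX_TOKEN are placeholders for the testbed's actual endpoint and token, not values from this job:

    # Placeholders: export NETBOX_HOST and NETBOX_TOKEN first.
    # Counts the created device types and the prefixes from 100-initialise.yml.
    curl -sk -H "Authorization: Token ${NETBOX_TOKEN}" \
        "https://${NETBOX_HOST}/api/dcim/device-types/" | jq '.count'
    curl -sk -H "Authorization: Token ${NETBOX_TOKEN}" \
        "https://${NETBOX_HOST}/api/ipam/prefixes/" | jq '.count'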
2025-05-13 23:03:43.008532 | orchestrator | 2025-05-13 23:03:43 | INFO  | Handle file /netbox/resources/200-rack-1000.yml
2025-05-13 23:03:44.005495 | orchestrator |
2025-05-13 23:03:44.005613 | orchestrator | PLAY [Manage NetBox resources defined in 200-rack-1000.yml] ********************
2025-05-13 23:03:44.055689 | orchestrator |
2025-05-13 23:03:44.055919 | orchestrator | TASK [Manage NetBox resource 1000 of type rack] ********************************
2025-05-13 23:03:45.630871 | orchestrator | changed: [localhost]
2025-05-13 23:03:45.634252 | orchestrator |
2025-05-13 23:03:45.635877 | orchestrator | TASK [Manage NetBox resource testbed-switch-0 of type device] ******************
2025-05-13 23:03:52.286358 | orchestrator | changed: [localhost]
2025-05-13 23:03:52.286590 | orchestrator |
2025-05-13 23:03:52.286680 | orchestrator | TASK [Manage NetBox resource testbed-switch-1 of type device] ******************
2025-05-13 23:03:58.247167 | orchestrator | changed: [localhost]
2025-05-13 23:03:58.250631 | orchestrator |
2025-05-13 23:03:58.251059 | orchestrator | TASK [Manage NetBox resource testbed-switch-2 of type device] ******************
2025-05-13 23:04:04.005855 | orchestrator | changed: [localhost]
2025-05-13 23:04:04.007194 | orchestrator |
2025-05-13 23:04:04.008847 | orchestrator | TASK [Manage NetBox resource testbed-switch-oob of type device] ****************
2025-05-13 23:04:16.106432 | orchestrator | changed: [localhost]
2025-05-13 23:04:16.106540 | orchestrator |
2025-05-13 23:04:16.106557 | orchestrator | TASK [Manage NetBox resource testbed-manager of type device] *******************
2025-05-13 23:04:18.363085 | orchestrator | changed: [localhost]
2025-05-13 23:04:18.364232 | orchestrator |
2025-05-13 23:04:18.364273 | orchestrator | TASK [Manage NetBox resource testbed-node-0 of type device] ********************
2025-05-13 23:04:20.899986 | orchestrator | changed: [localhost]
2025-05-13 23:04:20.902352 | orchestrator |
2025-05-13 23:04:20.904633 | orchestrator | TASK [Manage NetBox resource testbed-node-1 of type device] ********************
2025-05-13 23:04:23.775060 | orchestrator | changed: [localhost]
2025-05-13 23:04:23.775178 | orchestrator |
2025-05-13 23:04:23.775389 | orchestrator | TASK [Manage NetBox resource testbed-node-2 of type device] ********************
2025-05-13 23:04:26.286267 | orchestrator | changed: [localhost]
2025-05-13 23:04:26.287104 | orchestrator |
2025-05-13 23:04:26.287754 | orchestrator | TASK [Manage NetBox resource testbed-node-3 of type device] ********************
2025-05-13 23:04:29.042268 | orchestrator | changed: [localhost]
2025-05-13 23:04:29.043337 | orchestrator |
2025-05-13 23:04:29.043560 | orchestrator | TASK [Manage NetBox resource testbed-node-4 of type device] ********************
2025-05-13 23:04:31.379500 | orchestrator | changed: [localhost]
2025-05-13 23:04:31.382492 | orchestrator |
2025-05-13 23:04:31.384431 | orchestrator | TASK [Manage NetBox resource testbed-node-5 of type device] ********************
2025-05-13 23:04:33.653295 | orchestrator | changed: [localhost]
2025-05-13 23:04:33.653484 | orchestrator |
2025-05-13 23:04:33.654092 | orchestrator | TASK [Manage NetBox resource testbed-node-6 of type device] ********************
2025-05-13 23:04:36.537790 | orchestrator | changed: [localhost]
2025-05-13 23:04:36.538078 | orchestrator |
2025-05-13 23:04:36.538469 | orchestrator | TASK [Manage NetBox resource testbed-node-7 of type device] ********************
2025-05-13 23:04:39.104696 | orchestrator | changed: [localhost]
2025-05-13 23:04:39.110115 | orchestrator |
2025-05-13 23:04:39.110447 | orchestrator | TASK [Manage NetBox resource testbed-node-8 of type device] ********************
2025-05-13 23:04:41.549873 | orchestrator | changed: [localhost]
2025-05-13 23:04:41.550347 | orchestrator |
2025-05-13 23:04:41.550847 | orchestrator | TASK [Manage NetBox resource testbed-node-9 of type device] ********************
2025-05-13 23:04:44.077075 | orchestrator | changed: [localhost]
2025-05-13 23:04:44.077574 | orchestrator |
2025-05-13 23:04:44.078528 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:04:44.078801 | orchestrator | 2025-05-13 23:04:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:04:44.078906 | orchestrator | 2025-05-13 23:04:44 | INFO  | Please wait and do not abort execution.
2025-05-13 23:04:44.079655 | orchestrator | localhost : ok=16 changed=16 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:04:44.312954 | orchestrator | 2025-05-13 23:04:44 | INFO  | Handle file /netbox/resources/300-testbed-switch-0.yml
2025-05-13 23:04:44.325332 | orchestrator | 2025-05-13 23:04:44 | INFO  | Handle file /netbox/resources/300-testbed-node-9.yml
2025-05-13 23:04:44.332157 | orchestrator | 2025-05-13 23:04:44 | INFO  | Handle file /netbox/resources/300-testbed-node-1.yml
2025-05-13 23:04:44.334718 | orchestrator | 2025-05-13 23:04:44 | INFO  | Handle file /netbox/resources/300-testbed-node-3.yml
2025-05-13 23:04:45.572743 | orchestrator |
2025-05-13 23:04:45.572853 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-3.yml] ***************
2025-05-13 23:04:45.572870 | orchestrator |
2025-05-13 23:04:45.572945 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-0.yml] *************
2025-05-13 23:04:45.579061 | orchestrator |
2025-05-13 23:04:45.579227 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-9.yml] ***************
2025-05-13 23:04:45.612925 | orchestrator |
2025-05-13 23:04:45.613018 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-1.yml] ***************
2025-05-13 23:04:45.623612 | orchestrator |
2025-05-13 23:04:45.624965 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:45.625010 | orchestrator |
2025-05-13 23:04:45.625219 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:45.641073 | orchestrator |
2025-05-13 23:04:45.641152 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:45.670348 | orchestrator |
2025-05-13 23:04:45.671204 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:48.288754 | orchestrator | changed: [localhost]
2025-05-13 23:04:48.289334 | orchestrator |
2025-05-13 23:04:48.289648 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:48.500171 | orchestrator | changed: [localhost]
2025-05-13 23:04:48.508814 | orchestrator |
2025-05-13 23:04:48.508875 | orchestrator | TASK [Manage NetBox resource Management1 of type device_interface] *************
2025-05-13 23:04:48.693760 | orchestrator | changed: [localhost]
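Because osism manage netbox was started with --parallel 4, four of the 300-* resource files are processed at once, which is why PLAY and TASK lines from different files interleave from here on; each playbook still ends with its own PLAY RECAP. A hypothetical query to confirm that all testbed devices landed in NetBox, using the same placeholder conventions as above:

    curl -sk -H "Authorization: Token ${NETBOX_TOKEN}" \
        "https://${NETBOX_HOST}/api/dcim/devices/?q=testbed" | jq '.count'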
2025-05-13 23:04:48.697450 | orchestrator |
2025-05-13 23:04:48.697921 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:48.813860 | orchestrator | changed: [localhost]
2025-05-13 23:04:48.818717 | orchestrator |
2025-05-13 23:04:48.818760 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:51.001926 | orchestrator | changed: [localhost]
2025-05-13 23:04:51.004759 | orchestrator |
2025-05-13 23:04:51.005033 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:04:51.195887 | orchestrator | changed: [localhost]
2025-05-13 23:04:51.196546 | orchestrator |
2025-05-13 23:04:51.196933 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:51.418089 | orchestrator | changed: [localhost]
2025-05-13 23:04:51.418558 | orchestrator |
2025-05-13 23:04:51.419087 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:53.020714 | orchestrator | changed: [localhost]
2025-05-13 23:04:53.021687 | orchestrator |
2025-05-13 23:04:53.023665 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:53.089537 | orchestrator | changed: [localhost]
2025-05-13 23:04:53.093788 | orchestrator |
2025-05-13 23:04:53.093915 | orchestrator | TASK [Manage NetBox resource testbed-switch-0 of type device] ******************
2025-05-13 23:04:53.318957 | orchestrator | changed: [localhost]
2025-05-13 23:04:53.325760 | orchestrator |
2025-05-13 23:04:53.326074 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:55.133550 | orchestrator | changed: [localhost]
2025-05-13 23:04:55.151940 | orchestrator |
2025-05-13 23:04:55.152028 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:04:55.661881 | orchestrator | changed: [localhost]
2025-05-13 23:04:55.662629 | orchestrator |
2025-05-13 23:04:55.662978 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:04:56.460171 | orchestrator | changed: [localhost]
2025-05-13 23:04:56.462265 | orchestrator |
2025-05-13 23:04:56.462311 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:04:56.855856 | orchestrator | changed: [localhost]
2025-05-13 23:04:56.861111 | orchestrator |
2025-05-13 23:04:56.861628 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:04:56.861659 | orchestrator | 2025-05-13 23:04:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:04:56.861673 | orchestrator | 2025-05-13 23:04:56 | INFO  | Please wait and do not abort execution.
2025-05-13 23:04:56.861951 | orchestrator | localhost : ok=5 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:04:57.129146 | orchestrator | 2025-05-13 23:04:57 | INFO  | Handle file /netbox/resources/300-testbed-node-6.yml
2025-05-13 23:04:57.619439 | orchestrator | changed: [localhost]
2025-05-13 23:04:57.619581 | orchestrator |
2025-05-13 23:04:57.619661 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:04:57.829997 | orchestrator | changed: [localhost]
2025-05-13 23:04:57.835511 | orchestrator |
2025-05-13 23:04:57.835544 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:58.311214 | orchestrator |
2025-05-13 23:04:58.313099 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-6.yml] ***************
2025-05-13 23:04:58.369155 | orchestrator |
2025-05-13 23:04:58.369537 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:04:58.441908 | orchestrator | changed: [localhost]
2025-05-13 23:04:58.446363 | orchestrator |
2025-05-13 23:04:58.446684 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:04:59.281724 | orchestrator | changed: [localhost]
2025-05-13 23:04:59.284836 | orchestrator |
2025-05-13 23:04:59.285527 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:00.144663 | orchestrator | changed: [localhost]
2025-05-13 23:05:00.149084 | orchestrator |
2025-05-13 23:05:00.149826 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:00.234443 | orchestrator | changed: [localhost]
2025-05-13 23:05:00.247330 | orchestrator |
2025-05-13 23:05:00.250955 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:01.383030 | orchestrator | changed: [localhost]
2025-05-13 23:05:01.390080 | orchestrator |
2025-05-13 23:05:01.390933 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:01.624079 | orchestrator | changed: [localhost]
2025-05-13 23:05:01.629815 | orchestrator |
2025-05-13 23:05:01.633620 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:02.240251 | orchestrator | changed: [localhost]
2025-05-13 23:05:02.245285 | orchestrator |
2025-05-13 23:05:02.246255 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:02.549895 | orchestrator | changed: [localhost]
2025-05-13 23:05:02.551764 | orchestrator |
2025-05-13 23:05:02.552042 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:03.202503 | orchestrator | changed: [localhost]
2025-05-13 23:05:03.210697 | orchestrator |
2025-05-13 23:05:03.210895 | orchestrator | TASK [Manage NetBox resource testbed-node-3 of type device] ********************
2025-05-13 23:05:03.439930 | orchestrator | changed: [localhost]
2025-05-13 23:05:03.447590 | orchestrator |
2025-05-13 23:05:03.447659 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:03.761346 | orchestrator | changed: [localhost]
2025-05-13 23:05:03.761555 | orchestrator |
2025-05-13 23:05:03.763098 | orchestrator | TASK [Manage NetBox resource testbed-node-1 of type device] ********************
2025-05-13 23:05:04.650337 | orchestrator | changed: [localhost]
2025-05-13 23:05:04.653543 | orchestrator |
2025-05-13 23:05:04.654236 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:04.925622 | orchestrator | changed: [localhost]
2025-05-13 23:05:04.930712 | orchestrator |
2025-05-13 23:05:04.931249 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 23:05:05.444318 | orchestrator | changed: [localhost]
2025-05-13 23:05:05.445754 | orchestrator |
2025-05-13 23:05:05.445878 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 23:05:05.765766 | orchestrator | changed: [localhost]
2025-05-13 23:05:05.772838 | orchestrator |
2025-05-13 23:05:05.773797 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:06.085930 | orchestrator | changed: [localhost]
2025-05-13 23:05:06.089968 | orchestrator |
2025-05-13 23:05:06.090559 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:07.568320 | orchestrator | changed: [localhost]
2025-05-13 23:05:07.568618 | orchestrator |
2025-05-13 23:05:07.568741 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:05:07.569169 | orchestrator | 2025-05-13 23:05:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:07.569684 | orchestrator | 2025-05-13 23:05:07 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:07.573430 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:05:07.731216 | orchestrator | changed: [localhost]
2025-05-13 23:05:07.733883 | orchestrator |
2025-05-13 23:05:07.734283 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:07.813636 | orchestrator | 2025-05-13 23:05:07 | INFO  | Handle file /netbox/resources/300-testbed-switch-2.yml
2025-05-13 23:05:07.863919 | orchestrator | changed: [localhost]
2025-05-13 23:05:07.868680 | orchestrator |
2025-05-13 23:05:07.868929 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:05:07.869232 | orchestrator | 2025-05-13 23:05:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:07.869255 | orchestrator | 2025-05-13 23:05:07 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:07.869655 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:05:07.978267 | orchestrator | changed: [localhost]
2025-05-13 23:05:07.979645 | orchestrator |
2025-05-13 23:05:07.979683 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:08.134643 | orchestrator | 2025-05-13 23:05:08 | INFO  | Handle file /netbox/resources/300-testbed-node-5.yml
2025-05-13 23:05:08.868821 | orchestrator |
2025-05-13 23:05:08.868949 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-2.yml] *************
2025-05-13 23:05:08.918248 | orchestrator |
2025-05-13 23:05:08.918465 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:09.125097 | orchestrator | changed: [localhost]
2025-05-13 23:05:09.127816 | orchestrator |
2025-05-13 23:05:09.131484 | orchestrator | TASK [Manage NetBox resource testbed-node-9 of type device] ********************
2025-05-13 23:05:09.258838 | orchestrator |
2025-05-13 23:05:09.258924 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-5.yml] ***************
2025-05-13 23:05:09.318939 | orchestrator |
2025-05-13 23:05:09.320053 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:09.832130 | orchestrator | changed: [localhost]
2025-05-13 23:05:09.836052 | orchestrator |
2025-05-13 23:05:09.836108 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:11.014097 | orchestrator | changed: [localhost]
2025-05-13 23:05:11.015811 | orchestrator |
2025-05-13 23:05:11.016292 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 23:05:11.450692 | orchestrator | changed: [localhost]
2025-05-13 23:05:11.453742 | orchestrator |
2025-05-13 23:05:11.453804 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:11.764869 | orchestrator | changed: [localhost]
2025-05-13 23:05:11.770954 | orchestrator |
2025-05-13 23:05:11.772587 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:11.939960 | orchestrator | changed: [localhost]
2025-05-13 23:05:11.943784 | orchestrator |
2025-05-13 23:05:11.943969 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:13.559722 | orchestrator | changed: [localhost]
2025-05-13 23:05:13.564916 | orchestrator |
2025-05-13 23:05:13.564981 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:14.022287 | orchestrator | changed: [localhost]
2025-05-13 23:05:14.023993 | orchestrator |
2025-05-13 23:05:14.024073 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:05:14.024414 | orchestrator | 2025-05-13 23:05:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:14.024452 | orchestrator | 2025-05-13 23:05:14 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:14.026243 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:05:14.127844 | orchestrator | changed: [localhost]
2025-05-13 23:05:14.129256 | orchestrator |
2025-05-13 23:05:14.129716 | orchestrator | TASK [Manage NetBox resource Management1 of type device_interface] *************
2025-05-13 23:05:14.292343 | orchestrator | 2025-05-13 23:05:14 | INFO  | Handle file /netbox/resources/300-testbed-node-8.yml
2025-05-13 23:05:14.447549 | orchestrator | changed: [localhost]
2025-05-13 23:05:14.455818 | orchestrator |
2025-05-13 23:05:14.455911 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:14.935788 | orchestrator | changed: [localhost]
2025-05-13 23:05:14.937395 | orchestrator |
2025-05-13 23:05:14.938132 | orchestrator | TASK [Manage NetBox resource testbed-node-6 of type device] ********************
2025-05-13 23:05:15.458259 | orchestrator |
2025-05-13 23:05:15.460850 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-8.yml] ***************
2025-05-13 23:05:15.519411 | orchestrator |
2025-05-13 23:05:15.520107 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:16.346785 | orchestrator | changed: [localhost]
2025-05-13 23:05:16.351223 | orchestrator |
2025-05-13 23:05:16.355233 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:16.648275 | orchestrator | changed: [localhost]
2025-05-13 23:05:16.654865 | orchestrator |
2025-05-13 23:05:16.656098 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:17.000749 | orchestrator | changed: [localhost]
2025-05-13 23:05:17.008634 | orchestrator |
2025-05-13 23:05:17.009534 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 23:05:17.957288 | orchestrator | changed: [localhost]
2025-05-13 23:05:17.963292 | orchestrator |
2025-05-13 23:05:17.963546 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:18.403671 | orchestrator | changed: [localhost]
2025-05-13 23:05:18.403803 | orchestrator |
2025-05-13 23:05:18.406014 | orchestrator | TASK [Manage NetBox resource testbed-switch-2 of type device] ******************
2025-05-13 23:05:18.881176 | orchestrator | changed: [localhost]
2025-05-13 23:05:18.887017 | orchestrator |
2025-05-13 23:05:18.888303 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:19.617187 | orchestrator | changed: [localhost]
2025-05-13 23:05:19.617297 | orchestrator |
2025-05-13 23:05:19.619007 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:05:19.619093 | orchestrator | 2025-05-13 23:05:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:19.619118 | orchestrator | 2025-05-13 23:05:19 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:19.619246 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:05:19.869410 | orchestrator | 2025-05-13 23:05:19 | INFO  | Handle file /netbox/resources/300-testbed-node-0.yml 2025-05-13 23:05:20.257214 | orchestrator | changed: [localhost] 2025-05-13 23:05:20.261673 | orchestrator | 2025-05-13 23:05:20.265759 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:20.775593 | orchestrator | changed: [localhost] 2025-05-13 23:05:20.776048 | orchestrator | 2025-05-13 23:05:20.776550 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 23:05:21.062155 | orchestrator | 2025-05-13 23:05:21.062709 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-0.yml] *************** 2025-05-13 23:05:21.119779 | orchestrator | 2025-05-13 23:05:21.119865 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:21.307104 | orchestrator | changed: [localhost] 2025-05-13 23:05:21.311769 | orchestrator | 2025-05-13 23:05:21.312786 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 23:05:22.371658 | orchestrator | changed: [localhost] 2025-05-13 23:05:22.375500 | orchestrator | 2025-05-13 23:05:22.376452 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:22.585867 | orchestrator | changed: [localhost] 2025-05-13 23:05:22.586461 | orchestrator | 2025-05-13 23:05:22.589450 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:05:22.589493 | orchestrator | 2025-05-13 23:05:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:05:22.589506 | orchestrator | 2025-05-13 23:05:22 | INFO  | Please wait and do not abort execution. 
2025-05-13 23:05:22.592245 | orchestrator | localhost : ok=6 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:05:22.866161 | orchestrator | 2025-05-13 23:05:22 | INFO  | Handle file /netbox/resources/300-testbed-manager.yml 2025-05-13 23:05:23.045936 | orchestrator | changed: [localhost] 2025-05-13 23:05:23.061135 | orchestrator | 2025-05-13 23:05:23.061388 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 23:05:23.426326 | orchestrator | changed: [localhost] 2025-05-13 23:05:23.428036 | orchestrator | 2025-05-13 23:05:23.428573 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:24.298086 | orchestrator | 2025-05-13 23:05:24.299233 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-manager.yml] ************** 2025-05-13 23:05:24.356431 | orchestrator | 2025-05-13 23:05:24.356950 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:25.196455 | orchestrator | changed: [localhost] 2025-05-13 23:05:25.198763 | orchestrator | 2025-05-13 23:05:25.199052 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 23:05:25.249209 | orchestrator | changed: [localhost] 2025-05-13 23:05:25.251650 | orchestrator | 2025-05-13 23:05:25.252425 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 23:05:26.156132 | orchestrator | changed: [localhost] 2025-05-13 23:05:26.157500 | orchestrator | 2025-05-13 23:05:26.157957 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:26.790703 | orchestrator | changed: [localhost] 2025-05-13 23:05:26.804490 | orchestrator | 2025-05-13 23:05:26.804777 | orchestrator | TASK [Manage NetBox resource testbed-node-5 of type device] ******************** 2025-05-13 23:05:27.130417 | orchestrator | changed: [localhost] 2025-05-13 23:05:27.136829 | orchestrator | 2025-05-13 23:05:27.137388 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 23:05:27.636805 | orchestrator | changed: [localhost] 2025-05-13 23:05:27.640961 | orchestrator | 2025-05-13 23:05:27.641009 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:28.362272 | orchestrator | changed: [localhost] 2025-05-13 23:05:28.368515 | orchestrator | 2025-05-13 23:05:28.368764 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:28.685775 | orchestrator | changed: [localhost] 2025-05-13 23:05:28.686508 | orchestrator | 2025-05-13 23:05:28.686780 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] *************** 2025-05-13 23:05:28.854853 | orchestrator | changed: [localhost] 2025-05-13 23:05:28.860329 | orchestrator | 2025-05-13 23:05:28.860466 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 23:05:30.358202 | orchestrator | changed: [localhost] 2025-05-13 23:05:30.358328 | orchestrator | 2025-05-13 23:05:30.359735 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:30.694262 | orchestrator | changed: [localhost] 2025-05-13 23:05:30.695640 | orchestrator | 2025-05-13 23:05:30.696702 | orchestrator | 
TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 23:05:30.840040 | orchestrator | changed: [localhost] 2025-05-13 23:05:30.843437 | orchestrator | 2025-05-13 23:05:30.843551 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 23:05:31.586734 | orchestrator | changed: [localhost] 2025-05-13 23:05:31.590726 | orchestrator | 2025-05-13 23:05:31.590795 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:05:31.590837 | orchestrator | 2025-05-13 23:05:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:05:31.590851 | orchestrator | 2025-05-13 23:05:31 | INFO  | Please wait and do not abort execution. 2025-05-13 23:05:31.591006 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:05:31.808413 | orchestrator | 2025-05-13 23:05:31 | INFO  | Handle file /netbox/resources/300-testbed-node-4.yml 2025-05-13 23:05:32.393622 | orchestrator | changed: [localhost] 2025-05-13 23:05:32.395960 | orchestrator | 2025-05-13 23:05:32.396024 | orchestrator | TASK [Manage NetBox resource testbed-node-8 of type device] ******************** 2025-05-13 23:05:32.531442 | orchestrator | changed: [localhost] 2025-05-13 23:05:32.531530 | orchestrator | 2025-05-13 23:05:32.531778 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:32.561269 | orchestrator | changed: [localhost] 2025-05-13 23:05:32.566291 | orchestrator | 2025-05-13 23:05:32.566429 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 23:05:32.930805 | orchestrator | 2025-05-13 23:05:32.930913 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-4.yml] *************** 2025-05-13 23:05:32.983046 | orchestrator | 2025-05-13 23:05:32.983787 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:34.101715 | orchestrator | changed: [localhost] 2025-05-13 23:05:34.109938 | orchestrator | 2025-05-13 23:05:34.110108 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] *************** 2025-05-13 23:05:34.110397 | orchestrator | changed: [localhost] 2025-05-13 23:05:34.117782 | orchestrator | 2025-05-13 23:05:34.117919 | orchestrator | TASK [Manage NetBox resource testbed-node-0 of type device] ******************** 2025-05-13 23:05:34.874787 | orchestrator | changed: [localhost] 2025-05-13 23:05:34.876407 | orchestrator | 2025-05-13 23:05:34.876994 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 23:05:35.276483 | orchestrator | changed: [localhost] 2025-05-13 23:05:35.298869 | orchestrator | 2025-05-13 23:05:35.298972 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 23:05:36.054835 | orchestrator | changed: [localhost] 2025-05-13 23:05:36.055453 | orchestrator | 2025-05-13 23:05:36.056019 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] *************** 2025-05-13 23:05:36.710975 | orchestrator | changed: [localhost] 2025-05-13 23:05:36.713979 | orchestrator | 2025-05-13 23:05:36.715631 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:05:36.715700 | 
2025-05-13 23:05:36.715700 | orchestrator | 2025-05-13 23:05:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:36.715716 | orchestrator | 2025-05-13 23:05:36 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:36.715804 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:05:36.782180 | orchestrator | changed: [localhost]
2025-05-13 23:05:36.793040 | orchestrator |
2025-05-13 23:05:36.793234 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:36.961318 | orchestrator | 2025-05-13 23:05:36 | INFO  | Handle file /netbox/resources/300-testbed-node-7.yml
2025-05-13 23:05:37.601602 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not resolve id of primary_mac_address: 52:8F:1C:A3:D7:E9"}
2025-05-13 23:05:37.601709 | orchestrator |
2025-05-13 23:05:37.602080 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:05:37.602172 | orchestrator | 2025-05-13 23:05:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:37.602396 | orchestrator | 2025-05-13 23:05:37 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:37.605705 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2025-05-13 23:05:37.864776 | orchestrator | 2025-05-13 23:05:37 | INFO  | Handle file /netbox/resources/300-testbed-node-2.yml
2025-05-13 23:05:38.007747 | orchestrator | changed: [localhost]
2025-05-13 23:05:38.016178 | orchestrator |
2025-05-13 23:05:38.016229 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:38.137982 | orchestrator |
2025-05-13 23:05:38.138133 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-7.yml] ***************
2025-05-13 23:05:38.203701 | orchestrator |
2025-05-13 23:05:38.203797 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:38.359583 | orchestrator | changed: [localhost]
2025-05-13 23:05:38.363731 | orchestrator |
2025-05-13 23:05:38.365303 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:38.988931 | orchestrator |
2025-05-13 23:05:38.989133 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-2.yml] ***************
2025-05-13 23:05:39.046778 | orchestrator |
2025-05-13 23:05:39.047464 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:40.177272 | orchestrator | changed: [localhost]
2025-05-13 23:05:40.179732 | orchestrator |
2025-05-13 23:05:40.180451 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:40.469384 | orchestrator | changed: [localhost]
2025-05-13 23:05:40.471426 | orchestrator |
2025-05-13 23:05:40.472731 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:40.654623 | orchestrator | changed: [localhost]
2025-05-13 23:05:40.661280 | orchestrator |
2025-05-13 23:05:40.665593 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:41.631139 | orchestrator | changed: [localhost]
2025-05-13 23:05:41.631496 | orchestrator |
2025-05-13 23:05:41.634926 | orchestrator | TASK [Manage NetBox resource testbed-manager of type device] *******************
2025-05-13 23:05:42.003461 | orchestrator | changed: [localhost]
2025-05-13 23:05:42.004094 | orchestrator |
2025-05-13 23:05:42.004493 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:42.973771 | orchestrator | changed: [localhost]
2025-05-13 23:05:42.979831 | orchestrator |
2025-05-13 23:05:42.981741 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:43.092319 | orchestrator | changed: [localhost]
2025-05-13 23:05:43.102980 | orchestrator |
2025-05-13 23:05:43.103040 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:43.458989 | orchestrator | changed: [localhost]
2025-05-13 23:05:43.468040 | orchestrator |
2025-05-13 23:05:43.468095 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 23:05:43.921707 | orchestrator | changed: [localhost]
2025-05-13 23:05:43.922160 | orchestrator |
2025-05-13 23:05:43.923899 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:44.694957 | orchestrator | changed: [localhost]
2025-05-13 23:05:44.695094 | orchestrator |
2025-05-13 23:05:44.695377 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:45.009831 | orchestrator | changed: [localhost]
2025-05-13 23:05:45.019823 | orchestrator |
2025-05-13 23:05:45.020016 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:45.704248 | orchestrator | changed: [localhost]
2025-05-13 23:05:45.706714 | orchestrator |
2025-05-13 23:05:45.706769 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:05:45.706809 | orchestrator | 2025-05-13 23:05:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:45.706823 | orchestrator | 2025-05-13 23:05:45 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:45.706879 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:05:45.899376 | orchestrator | changed: [localhost]
2025-05-13 23:05:45.906074 | orchestrator |
2025-05-13 23:05:45.906531 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:45.922399 | orchestrator | 2025-05-13 23:05:45 | INFO  | Handle file /netbox/resources/300-testbed-switch-1.yml
2025-05-13 23:05:46.188141 | orchestrator | changed: [localhost]
2025-05-13 23:05:46.188860 | orchestrator |
2025-05-13 23:05:46.188949 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:47.017778 | orchestrator | changed: [localhost]
2025-05-13 23:05:47.023739 | orchestrator |
2025-05-13 23:05:47.025035 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:47.103021 | orchestrator |
2025-05-13 23:05:47.103145 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-1.yml] *************
2025-05-13 23:05:47.170001 | orchestrator |
2025-05-13 23:05:47.171989 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 23:05:48.167006 | orchestrator | changed: [localhost]
2025-05-13 23:05:48.175601 | orchestrator |
2025-05-13 23:05:48.175765 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:48.515719 | orchestrator | changed: [localhost]
2025-05-13 23:05:48.525183 | orchestrator |
2025-05-13 23:05:48.526740 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:48.845044 | orchestrator | changed: [localhost]
2025-05-13 23:05:48.848456 | orchestrator |
2025-05-13 23:05:48.848908 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:49.593588 | orchestrator | changed: [localhost]
2025-05-13 23:05:49.602864 | orchestrator |
2025-05-13 23:05:49.603786 | orchestrator | TASK [Manage NetBox resource Management1 of type device_interface] *************
2025-05-13 23:05:50.171403 | orchestrator | changed: [localhost]
2025-05-13 23:05:50.183033 | orchestrator |
2025-05-13 23:05:50.186609 | orchestrator | TASK [Manage NetBox resource testbed-node-4 of type device] ********************
2025-05-13 23:05:50.422074 | orchestrator | changed: [localhost]
2025-05-13 23:05:50.422184 | orchestrator |
2025-05-13 23:05:50.422201 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:51.108312 | orchestrator | changed: [localhost]
2025-05-13 23:05:51.114958 | orchestrator |
2025-05-13 23:05:51.115038 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:51.967761 | orchestrator | changed: [localhost]
2025-05-13 23:05:51.969084 | orchestrator |
2025-05-13 23:05:51.972821 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:52.091201 | orchestrator | changed: [localhost]
2025-05-13 23:05:52.101700 | orchestrator |
2025-05-13 23:05:52.103616 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 23:05:52.317798 | orchestrator | changed: [localhost]
2025-05-13 23:05:52.325739 | orchestrator |
2025-05-13 23:05:52.326167 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 23:05:52.800994 | orchestrator | changed: [localhost]
2025-05-13 23:05:52.808103 | orchestrator |
2025-05-13 23:05:52.809625 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:53.751486 | orchestrator | changed: [localhost]
2025-05-13 23:05:53.756780 | orchestrator |
2025-05-13 23:05:53.756846 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:54.280432 | orchestrator | changed: [localhost]
2025-05-13 23:05:54.289978 | orchestrator |
2025-05-13 23:05:54.291548 | orchestrator | TASK [Manage NetBox resource testbed-switch-1 of type device] ******************
2025-05-13 23:05:54.482543 | orchestrator | changed: [localhost]
2025-05-13 23:05:54.485691 | orchestrator |
2025-05-13 23:05:54.486652 | orchestrator | TASK [Manage NetBox resource testbed-node-7 of type device] ********************
2025-05-13 23:05:54.809955 | orchestrator | changed: [localhost]
2025-05-13 23:05:54.810261 | orchestrator |
2025-05-13 23:05:54.810470 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:05:54.810581 | orchestrator | 2025-05-13 23:05:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:54.811710 | orchestrator | 2025-05-13 23:05:54 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:54.811818 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:05:55.227786 | orchestrator | changed: [localhost]
2025-05-13 23:05:55.230624 | orchestrator |
2025-05-13 23:05:55.230972 | orchestrator | TASK [Manage NetBox resource testbed-node-2 of type device] ********************
2025-05-13 23:05:55.953964 | orchestrator | changed: [localhost]
2025-05-13 23:05:55.955803 | orchestrator |
2025-05-13 23:05:55.956562 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 23:05:56.226567 | orchestrator | changed: [localhost]
2025-05-13 23:05:56.228913 | orchestrator |
2025-05-13 23:05:56.229212 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 23:05:56.909651 | orchestrator | changed: [localhost]
2025-05-13 23:05:56.910529 | orchestrator |
2025-05-13 23:05:56.911567 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 23:05:57.511922 | orchestrator | changed: [localhost]
2025-05-13 23:05:57.512093 | orchestrator |
2025-05-13 23:05:57.512124 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:05:57.512181 | orchestrator | 2025-05-13 23:05:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:57.512196 | orchestrator | 2025-05-13 23:05:57 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:57.512267 | orchestrator | localhost : ok=5 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:05:58.573168 | orchestrator | changed: [localhost]
2025-05-13 23:05:58.575733 | orchestrator |
2025-05-13 23:05:58.575810 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:05:58.575928 | orchestrator | 2025-05-13 23:05:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:58.576388 | orchestrator | 2025-05-13 23:05:58 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:58.576729 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:05:59.811557 | orchestrator | changed: [localhost]
2025-05-13 23:05:59.812082 | orchestrator |
2025-05-13 23:05:59.813172 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:05:59.813542 | orchestrator | 2025-05-13 23:05:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:05:59.813750 | orchestrator | 2025-05-13 23:05:59 | INFO  | Please wait and do not abort execution.
2025-05-13 23:05:59.815525 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:06:00.055468 | orchestrator | 2025-05-13 23:06:00 | INFO  | Runtime: 157.5920s
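The one failure in this run is worth flagging: a play aborts with "Could not resolve id of primary_mac_address: 52:8F:1C:A3:D7:E9", meaning a device resource referenced a MAC address as primary before the corresponding mac_address object could be resolved in NetBox; the interleaved output makes exact attribution hard, but a later task for testbed-node-7 of type device completes, which looks like an ordering race between mac_address creation and the device that references it. Whether the object exists can be checked against the NetBox REST API; a minimal sketch, assuming NetBox 4.2's /api/dcim/mac-addresses/ endpoint, an API base URL in $NETBOX_URL, and a token in $NETBOX_TOKEN (the URL and token handling are illustrative, not taken from this job):

    # Count mac_address objects matching the MAC the failed play referenced
    curl -s -H "Authorization: Token $NETBOX_TOKEN" \
        "$NETBOX_URL/api/dcim/mac-addresses/?mac_address=52:8F:1C:A3:D7:E9" | jq .count
    # A count of 0 means primary_mac_address cannot be resolved to an id,
    # which is exactly the error reported above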
2025-05-13 23:06:00.496272 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-05-13 23:06:00.698947 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-13 23:06:00.699079 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699094 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699106 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 4 minutes ago Up 4 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-05-13 23:06:00.699138 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 4 minutes ago Up 4 minutes (healthy) 8000/tcp
2025-05-13 23:06:00.699150 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699161 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" conductor 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699172 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699183 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 4 minutes ago Up 3 minutes (healthy)
2025-05-13 23:06:00.699194 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699205 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb 4 minutes ago Up 4 minutes (healthy) 3306/tcp
2025-05-13 23:06:00.699216 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" netbox 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699227 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699237 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 4 minutes ago Up 4 minutes (healthy) 6379/tcp
2025-05-13 23:06:00.699248 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699259 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699270 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.699281 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 4 minutes ago Up 4 minutes (healthy)
2025-05-13 23:06:00.707582 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-05-13 23:06:00.892936 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-13 23:06:00.893060 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" netbox 11 minutes ago Up 10 minutes (healthy)
2025-05-13 23:06:00.893075 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" netbox-worker 11 minutes ago Up 6 minutes (healthy)
2025-05-13 23:06:00.893086 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" postgres 11 minutes ago Up 10 minutes (healthy) 5432/tcp
2025-05-13 23:06:00.893099 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 11 minutes ago Up 10 minutes (healthy) 6379/tcp
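Every manager and NetBox container reports Up ... (healthy), which is what the deployment relies on before continuing. If a container were to report unhealthy, the health-check verdict and recent probe results can be read straight from the Docker engine; a minimal sketch (the container name manager-api-1 is taken from the table above; jq is only used for readability):

    # Current health verdict of a single service container
    docker inspect --format '{{.State.Health.Status}}' manager-api-1
    # Output and exit codes of the most recent health-check probes
    docker inspect --format '{{json .State.Health}}' manager-api-1 | jq '.Log[-3:]'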
2025-05-13 23:06:00.901058 | orchestrator | ++ semver latest 7.0.0
2025-05-13 23:06:00.956679 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-13 23:06:00.957531 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-13 23:06:00.957569 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-05-13 23:06:00.961926 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-05-13 23:06:02.707210 | orchestrator | 2025-05-13 23:06:02 | INFO  | Task e9650691-682f-489c-aaec-661f5ffc905f (resolvconf) was prepared for execution.
2025-05-13 23:06:02.707766 | orchestrator | 2025-05-13 23:06:02 | INFO  | It takes a moment until task e9650691-682f-489c-aaec-661f5ffc905f (resolvconf) has been started and output is visible here.
2025-05-13 23:06:06.698540 | orchestrator |
2025-05-13 23:06:06.698655 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-05-13 23:06:06.698986 | orchestrator |
2025-05-13 23:06:06.701476 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-13 23:06:06.702289 | orchestrator | Tuesday 13 May 2025 23:06:06 +0000 (0:00:00.135) 0:00:00.135 ***********
2025-05-13 23:06:10.187844 | orchestrator | ok: [testbed-manager]
2025-05-13 23:06:10.189355 | orchestrator |
2025-05-13 23:06:10.190121 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-13 23:06:10.191601 | orchestrator | Tuesday 13 May 2025 23:06:10 +0000 (0:00:03.492) 0:00:03.628 ***********
2025-05-13 23:06:10.244250 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:06:10.244548 | orchestrator |
2025-05-13 23:06:10.245296 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-13 23:06:10.246079 | orchestrator | Tuesday 13 May 2025 23:06:10 +0000 (0:00:00.056) 0:00:03.684 ***********
2025-05-13 23:06:10.341891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-05-13 23:06:10.342814 | orchestrator |
2025-05-13 23:06:10.344535 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-13 23:06:10.345227 | orchestrator | Tuesday 13 May 2025 23:06:10 +0000 (0:00:00.097) 0:00:03.781 ***********
2025-05-13 23:06:10.435868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-05-13 23:06:10.435969 | orchestrator |
2025-05-13 23:06:10.436455 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-13 23:06:10.437538 | orchestrator | Tuesday 13 May 2025 23:06:10 +0000 (0:00:00.093) 0:00:03.874 ***********
2025-05-13 23:06:11.558645 | orchestrator | ok: [testbed-manager]
2025-05-13 23:06:11.559545 | orchestrator |
2025-05-13 23:06:11.560606 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-13 23:06:11.561885 | orchestrator | Tuesday 13 May 2025 23:06:11 +0000 (0:00:01.121) 0:00:04.996 ***********
2025-05-13 23:06:11.622280 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:06:11.622432 | orchestrator |
2025-05-13 23:06:11.623429 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-13 23:06:11.625029 | orchestrator | Tuesday 13 May 2025 23:06:11 +0000 (0:00:00.065) 0:00:05.062 ***********
2025-05-13 23:06:12.111390 | orchestrator | ok: [testbed-manager]
2025-05-13 23:06:12.111603 | orchestrator |
2025-05-13 23:06:12.112586 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-13 23:06:12.112850 | orchestrator | Tuesday 13 May 2025 23:06:12 +0000 (0:00:00.489) 0:00:05.551 ***********
2025-05-13 23:06:12.191756 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:06:12.192505 | orchestrator |
2025-05-13 23:06:12.193991 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-13 23:06:12.194343 | orchestrator | Tuesday 13 May 2025 23:06:12 +0000 (0:00:00.079) 0:00:05.631 ***********
2025-05-13 23:06:12.765948 | orchestrator | changed: [testbed-manager]
2025-05-13 23:06:12.766208 | orchestrator |
2025-05-13 23:06:12.766800 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-13 23:06:12.767945 | orchestrator | Tuesday 13 May 2025 23:06:12 +0000 (0:00:00.575) 0:00:06.206 ***********
2025-05-13 23:06:13.936952 | orchestrator | changed: [testbed-manager]
2025-05-13 23:06:13.937675 | orchestrator |
2025-05-13 23:06:13.937944 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-13 23:06:13.938706 | orchestrator | Tuesday 13 May 2025 23:06:13 +0000 (0:00:01.169) 0:00:07.375 ***********
2025-05-13 23:06:14.945940 | orchestrator | ok: [testbed-manager]
2025-05-13 23:06:14.946101 | orchestrator |
2025-05-13 23:06:14.946120 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-13 23:06:14.946133 | orchestrator | Tuesday 13 May 2025 23:06:14 +0000 (0:00:01.006) 0:00:08.382 ***********
2025-05-13 23:06:15.024738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-05-13 23:06:15.025390 | orchestrator |
2025-05-13 23:06:15.026431 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-13 23:06:15.027281 | orchestrator | Tuesday 13 May 2025 23:06:15 +0000 (0:00:00.082) 0:00:08.464 ***********
2025-05-13 23:06:16.286711 | orchestrator | changed: [testbed-manager]
2025-05-13 23:06:16.286883 | orchestrator |
2025-05-13 23:06:16.287867 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:06:16.288072 | orchestrator | 2025-05-13 23:06:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:06:16.288989 | orchestrator | 2025-05-13 23:06:16 | INFO  | Please wait and do not abort execution.
2025-05-13 23:06:16.291491 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-13 23:06:16.292547 | orchestrator |
2025-05-13 23:06:16.293675 | orchestrator |
2025-05-13 23:06:16.294495 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:06:16.295478 | orchestrator | Tuesday 13 May 2025 23:06:16 +0000 (0:00:01.259) 0:00:09.724 ***********
2025-05-13 23:06:16.296463 | orchestrator | ===============================================================================
2025-05-13 23:06:16.297419 | orchestrator | Gathering Facts --------------------------------------------------------- 3.49s
2025-05-13 23:06:16.298175 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.26s
2025-05-13 23:06:16.299279 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.17s
2025-05-13 23:06:16.299952 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.12s
2025-05-13 23:06:16.300875 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.01s
2025-05-13 23:06:16.301128 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s
2025-05-13 23:06:16.301725 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-05-13 23:06:16.302511 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2025-05-13 23:06:16.302957 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-05-13 23:06:16.303447 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-05-13 23:06:16.303893 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-05-13 23:06:16.304375 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-05-13 23:06:16.304802 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
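The role replaced the packaged resolver setup with systemd-resolved, linked /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf, copied its configuration files and restarted the service. Whether the link and the configured name servers actually took effect can be checked on testbed-manager with standard tooling; a minimal sketch (resolvectl ships with systemd-resolved; the output shape varies by distribution):

    # /etc/resolv.conf should now be a symlink into /run/systemd/resolve
    ls -l /etc/resolv.conf
    # Name servers systemd-resolved is actually using per link
    resolvectl status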
2025-05-13 23:06:16.795281 | orchestrator | + osism apply sshconfig
2025-05-13 23:06:18.511440 | orchestrator | 2025-05-13 23:06:18 | INFO  | Task ddcf2c36-26f4-48ce-ba00-0167768f73a4 (sshconfig) was prepared for execution.
2025-05-13 23:06:18.511550 | orchestrator | 2025-05-13 23:06:18 | INFO  | It takes a moment until task ddcf2c36-26f4-48ce-ba00-0167768f73a4 (sshconfig) has been started and output is visible here.
2025-05-13 23:06:22.594515 | orchestrator |
2025-05-13 23:06:22.594908 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-05-13 23:06:22.596065 | orchestrator |
2025-05-13 23:06:22.596529 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-05-13 23:06:22.598900 | orchestrator | Tuesday 13 May 2025 23:06:22 +0000 (0:00:00.171) 0:00:00.171 ***********
2025-05-13 23:06:23.169994 | orchestrator | ok: [testbed-manager]
2025-05-13 23:06:23.170580 | orchestrator |
2025-05-13 23:06:23.172016 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-05-13 23:06:23.172895 | orchestrator | Tuesday 13 May 2025 23:06:23 +0000 (0:00:00.577) 0:00:00.749 ***********
2025-05-13 23:06:23.696640 | orchestrator | changed: [testbed-manager]
2025-05-13 23:06:23.696857 | orchestrator |
2025-05-13 23:06:23.696922 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-05-13 23:06:23.699512 | orchestrator | Tuesday 13 May 2025 23:06:23 +0000 (0:00:00.526) 0:00:01.275 ***********
2025-05-13 23:06:29.489676 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-05-13 23:06:29.489790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-13 23:06:29.490719 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-05-13 23:06:29.491581 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-05-13 23:06:29.492565 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-05-13 23:06:29.493510 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-05-13 23:06:29.494531 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-05-13 23:06:29.495844 | orchestrator |
2025-05-13 23:06:29.498149 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-05-13 23:06:29.498763 | orchestrator | Tuesday 13 May 2025 23:06:29 +0000 (0:00:05.791) 0:00:07.067 ***********
2025-05-13 23:06:29.560677 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:06:29.560835 | orchestrator |
2025-05-13 23:06:29.562449 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-05-13 23:06:29.563366 | orchestrator | Tuesday 13 May 2025 23:06:29 +0000 (0:00:00.073) 0:00:07.140 ***********
2025-05-13 23:06:30.169570 | orchestrator | changed: [testbed-manager]
2025-05-13 23:06:30.169674 | orchestrator |
2025-05-13 23:06:30.169755 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:06:30.169993 | orchestrator | 2025-05-13 23:06:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:06:30.170063 | orchestrator | 2025-05-13 23:06:30 | INFO  | Please wait and do not abort execution.
2025-05-13 23:06:30.171748 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 23:06:30.172170 | orchestrator |
2025-05-13 23:06:30.172730 | orchestrator |
2025-05-13 23:06:30.176716 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:06:30.176762 | orchestrator | Tuesday 13 May 2025 23:06:30 +0000 (0:00:00.608) 0:00:07.749 ***********
2025-05-13 23:06:30.176801 | orchestrator | ===============================================================================
2025-05-13 23:06:30.176814 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.79s
2025-05-13 23:06:30.176825 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s
2025-05-13 23:06:30.176836 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s
2025-05-13 23:06:30.176847 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s
2025-05-13 23:06:30.176858 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
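The sshconfig role drops one fragment per inventory host into the operator user's .ssh/config.d and assembles them into a single ssh configuration, so every testbed node becomes reachable by its inventory name. The assembled result is easiest to check with ssh's own config resolver; a minimal sketch (the fragment layout follows the osism.commons.sshconfig role; the exact template contents are not shown in this log):

    # Show the effective configuration ssh would use for one node
    ssh -G testbed-node-0 | grep -Ei '^(hostname|user|identityfile)'
    # Each host fragment lives in its own file before assembly
    ls ~/.ssh/config.d/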
2025-05-13 23:06:30.680919 | orchestrator | + osism apply known-hosts
2025-05-13 23:06:32.405880 | orchestrator | 2025-05-13 23:06:32 | INFO  | Task 8644f3f9-7b1a-4539-9764-f413b1c306e4 (known-hosts) was prepared for execution.
2025-05-13 23:06:32.405989 | orchestrator | 2025-05-13 23:06:32 | INFO  | It takes a moment until task 8644f3f9-7b1a-4539-9764-f413b1c306e4 (known-hosts) has been started and output is visible here.
2025-05-13 23:06:36.383895 | orchestrator |
2025-05-13 23:06:36.385149 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-05-13 23:06:36.385411 | orchestrator |
2025-05-13 23:06:36.387526 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-05-13 23:06:36.388370 | orchestrator | Tuesday 13 May 2025 23:06:36 +0000 (0:00:00.186) 0:00:00.186 ***********
2025-05-13 23:06:42.509950 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-13 23:06:42.510205 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-13 23:06:42.510884 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-13 23:06:42.511643 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-13 23:06:42.512367 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-13 23:06:42.512966 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-13 23:06:42.513735 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-13 23:06:42.514283 | orchestrator |
2025-05-13 23:06:42.514879 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-05-13 23:06:42.515652 | orchestrator | Tuesday 13 May 2025 23:06:42 +0000 (0:00:06.127) 0:00:06.313 ***********
2025-05-13 23:06:42.683744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-13 23:06:42.684655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-13 23:06:42.685388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-05-13 23:06:42.686577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-05-13 23:06:42.687996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-13 23:06:42.688827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-05-13 23:06:42.689076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-05-13 23:06:42.689748 | orchestrator |
2025-05-13 23:06:42.690215 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-13 23:06:42.690752 | orchestrator | Tuesday 13 May 2025 23:06:42 +0000 (0:00:00.173) 0:00:06.487 ***********
2025-05-13 23:06:43.953679 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkaOMU86+YNBnpB0O9eck87NPfDPHOfiK8EgXdUThZ8F0Lqu7lFiA8by+s6hl0yKoBbqWTEWZK1TA8sl/zYin1DrQnTWcgwDgVeacgCewvfXmVSqrh+wycUJksa7vRC/LhNK3XtNbsOulVpYVACCtFeywlVJkwG+Kef9DQRViVo3amNIhUZUl52YqTudg04J9Odk7sAn3rkYh/1p2zhuxjqY6fmGQuN5lAhjtfWtNuDnvDxFRb7rcp/S4NBmdEkA1rHiyyejNKkzmdV81LD95a2hbqz+LhEMnRQfvapYraOIzc2s1VUJT5Obt6k7YMIQewHojcyb/9iFX+P4U1188d7XTrir6q49jvEHKvg8/R+f6Hh65YS8XD6A+h84ol/m7izETb4cvGBfwF2Y1miCguHZF8Hpk+pXJKpOtBDEJp0LGHRkqqL+J0SJT/6OUyxJsxuvVMZQhnhkMUl7ezS9wvVVQPVmLegZ0lt7Y0E0hT1PGyd97ua57G0YNK1h07SaE=)
2025-05-13 23:06:43.956476 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC7rne5k8DDdPiVagNEGQMOFM2PfVUyg2i+I4J9kaSNmy+bQwgq00MFW4A45BJQyBwllFpwtQe14wqiHLa+CO5s=)
2025-05-13 23:06:43.956518 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFcXPi9LN7VxU2vmU1gJQ5d37lqeQT+jfzFfEXx+Jy7)
2025-05-13 23:06:43.957283 | orchestrator |
2025-05-13 23:06:43.958202 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-13 23:06:43.958433 | orchestrator | Tuesday 13 May 2025 23:06:43 +0000 (0:00:01.270) 0:00:07.757 ***********
2025-05-13 23:06:45.086094 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkz+ce5sK+Skajnl4l+MCrPy2OuuYyQ/zbOKkqYwdfcc+rdxrNFMRw/LjPOfDSTyQf9KXJ9N05DNfiEo+doepkYdpcYLAP6dn/e2qcHRjVhBm6xvwh7/Yzv+k01WW5tg71aHdPe6YAUyTQX3Z/UzqQfQdhaU7JRwzgIzk4pZX93Gr6gRTdFEE3QPJMA+azf0R5V+MFuH+acPou6NOgjq2KchXAUClgwOg17pfbZRRyQ8iW6O9WqzWMIrus1JBFv5WltdoxSHnf3KjHW+zAoky15Yhav8H46cVSS9D0ZBRMfSqWWXEQ2lzKrryaX1+gn3HOJDmdxbbQK53o9NSJo0Yw0enHHReWAY7TxtRQ4v4e+IeovFUqW/cEv+o9bowrZFVmdwyVVIHVZxQjk8n4qZwYb5Hf5v1CktApuItQMdYO3gUShhaG8kNkSTjUaFiU5JIgNocmPuldAoNPxUGQYZF8/3qQWCPp/r8j4N0GcrMFrYrgfSk2b2h42HDD+pvFc7M=)
2025-05-13 23:06:45.086241 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMvCWluQYIgU9xI1PjolX+VndWa9RG064Q4zJnkTELxAaonWAp2rXYU5HsaaVt1BADsKsQVoRJk6eaZjsVgf5Ms=)
2025-05-13 23:06:45.087395 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKOT5f6DBaSQ6nk+qyumvNuigyd5AyOITpoGTKJblZ2C)
2025-05-13 23:06:45.087779 | orchestrator |
2025-05-13 23:06:45.088806 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-13 23:06:45.089448 | orchestrator | Tuesday 13 May 2025 23:06:45 +0000 (0:00:01.131) 0:00:08.889 ***********
2025-05-13 23:06:46.203484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICZvmAwASqTm52+eEC4SIWjuAS6yEdSUhw0T/AofUToh)
2025-05-13 23:06:46.204090 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDG10W2mQ0GserUCoe1fpB2NfjgEgi0QXxVSbt2fpp5cgg4ztqJxQyCdQCvfPA1HLP7S6nxgxB5qFJmV2q5trUXl2ynpNlVVtlmrksRm3I8atRaZcGK8RzaEkdSTIpJq1dDCrUQY5oY30cRpYJus8gMm4Li+iSjiO6IHN3q2D6jUbRxgmsfQCazuljMGPxCMcNDTnFxcZc3rjlWV6C3rIikcD1GKF2/oZUH1KaVrxKuVu/s96re3n90SpN/YpaP1h30VJYPcrx4egi823ksf4iA7KVJqiLGHCWe8EDDOe/2Ch0RjTfsEseKSCHFLvaLnIfDXU8OUGzOHcdgG5rN/hYp4+HgoqavEB5byJd965fy8RntFQud1EBXxCC4n9zW2YclfNtJjAQ2KjL177QHrlbJjk5FcfT5a+bktL0gM93IatVtPrDy/k2CMOk/ijaVT4bX3ti0gjEP8MSWZoHonagK21uOWOH/PtfgLNx733KuUlA8+RB3B3nskiBTaLlu/Ns=)
2025-05-13 23:06:46.205220 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCO/gf9np0gQvWobp+4PpVYMPJMzitHlneImDQzvuDRqspuHZmh7xECCldk0q6T4fSRmreLdMsJlvSQXxfFvwiQ=)
2025-05-13 23:06:46.205936 | orchestrator |
2025-05-13 23:06:46.206766 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-13 23:06:46.208209 | orchestrator | Tuesday 13 May 2025 23:06:46 +0000 (0:00:01.117) 0:00:10.007 ***********
2025-05-13 23:06:47.307626 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnnAHK9kFvLYeiMD1zaJE2hJmbWde0cD6/vAbJKeH1peeON0vy7eWunJ/v1ODNLZ2Rrrgm0iUtbCF97kC7/B9Q=)
2025-05-13 23:06:47.308414 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ+LbBgbVNki1U+mIw0hAf/UsB/wXJQ2V5C2uQs7LvCC)
2025-05-13 23:06:47.308943 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMTyDCSkO8FR8E16/Sv4NAjqRkgmIsObZiNlb7jQQsD8shxCD1pEHEtl+lHsyj8hhruSi6BTzs+Al+b0ibdlcqXl24U3xBfofbJVkxbn2JGwipM0OmWJSIjArZlNx1Ft9+qsxMQGfx0+OYTMr/ElPjJrUrL/X2zqOVA856tNomFrwAICDuw4Uo2hD6MTmhsnmgqOg5CkYk/nMyyeL0JZUIiGkw4wcARzYpZEjaD61RTCHrDZTZPaTvepJKARkZ4qvUV/HOM/XaLlh/7UA7v15vDmcLrj9LXBwQ1YFKwg2JjwCaMFpHiXx4nUpz8IXr01rLPM/BNq/Hblh3K1acLTd0+B7hu2GIOJ4lqT5cQ9XEaqJGoCjNfGXXGEJsbZizIqaYPuCDbKcO478xhwSRIQGdSjVCvGzBa94lQoXezQLXDEgeUgYEBkUl1rlsW56vhTCr78vUbbQP7eKvqaJHAz3PShp5dn+ATDIYYTDrmPCdoYXpVi8eQt8vISNWXUjQxVM=)
2025-05-13 23:06:47.309589 | orchestrator |
2025-05-13 23:06:47.310143 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-13 23:06:47.310748 | orchestrator | Tuesday 13 May 2025 23:06:47 +0000 (0:00:01.102) 0:00:11.110 ***********
2025-05-13 23:06:48.428961 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGKkgekIo9jGWjM41/+Cvz4uqtIscRPqY2WXxDEyTfP5/eumYuYpi0FjAAjojMkyOe7lu2mvIGn04QoLqo9a+pIM1THBFy9v2z1VGZvdm6w2uDH8Y6bf+K9aDjFF46XMTHDF9FaKGWfzFqhVdiKunpOFA4oI5mY/rlQe3KVo1AnwRy4Uz3zXHHCtVnU+l14AASMSaCfq7ITKqWOep02arHAxnJT70qI2LLc7AEF0gcpWRszBVvozV6Sy9JfmRbvp3JAGtMF0zzYOfKLv9Vkf6NJmWTLCmUyZ95eO8zEea00rNDDmN34rK0E62zuzYZ2yRpyiu5FbfqSAvC64oNJtzyzkLBlO8aSY9JVSHPNgdGmreQf9eH+q+bW/xz9zmJSWEfD6+GD4JAQLeUd6qYlNZX6mOFRpE6l9s8FjUdrMLBZTS9rGXKx0JoF1edAFlaDG6xuZWnqegPBh0NnvZor5pcZuhqy0BawiAkvIE+MdzpFRC0mbOpJabZ2/1fFo0G8mM=)
2025-05-13 23:06:48.429075 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMy6ePLIDs3j3MpUlfFllOF1fYRWaCvH+UAJ7eEdJgSL3zaIgoDGcb0YFZVhi2ludAk10dQgl9sm8eRl1PgV8yY=)
2025-05-13 23:06:48.429417 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINpSClNk/bo3WQqamkexZXA+/UdM/g7/xG0E4hv+aHYX)
2025-05-13 23:06:48.430566 | orchestrator |
2025-05-13 23:06:48.432019 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-13 23:06:48.432566 | orchestrator | Tuesday 13 May 2025 23:06:48 +0000 (0:00:01.122) 0:00:12.232 ***********
2025-05-13 23:06:49.485140 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINh7V59aLKNWVm8t2AkQM0U24z41wnGvw9HExnxcAAsJ)
2025-05-13 23:06:49.485406 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqlaF9tsfoPKrlMefNlP5qHp5oTzGYMzeRUi2eWkiDE48Tsnv3EWoYvhHgOB+e4ac3Q06cdsdUyhlGH22A4rZ9NCuNATpHUDJHW3pcIz+OY+36PZVzF/aDJCiifx/GCr0bjwaYmyB0B0IkrXW6UShbWtFIXA56SfCpA9fjyDsKlYaPu8hJ/UKzpCx6u4t9kAOWMI2vFNJR1QA8JDTHPxymadaJsFn5JYt2JEiFlLhRtelSMC+SLZDkDw6UK4lxAu2PHQeZ23qIKVSPUoHAaB1k26ai5J+ZB4keH5s/da3JRLuOHtcfWtDQX/EseIsZ+7t5rFIMfNkwyf3/uGct7hBKIXrwgQRK0epgEG8wFyK886t+ntc8ekpIeipTclQ5AgY+lJfLdCIadQB3NLxgOC5T3BgtcDT3VMRtPlWgM6wNx3WOet8w5gsG707BNBSHh80phRdf3ClExYTCQv4mIHLABN36s0ESSGitcGS6nb8s0AR6olA8I+Dof2J6oR1kDa0=)
2025-05-13 23:06:49.487384 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOdTakDQ0m3z5z4Mpt4wR/I3Aw7Vb4F+Bwtrprt96nrl3L5oy6WFZ7qefoHxsKaahnAA9PAjWa9O6N3lM2wEyIg=)
2025-05-13 23:06:49.487901 | orchestrator |
2025-05-13 23:06:49.488698 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-13 23:06:49.489516 | orchestrator | Tuesday 13 May 2025 23:06:49 +0000 (0:00:01.054) 0:00:13.287 ***********
2025-05-13 23:06:50.592724 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFvWrggBLCpI1yXR/hrE8+HYqHGZ72v0BzPEV3DC3zJ3qFbTtXg/QNXZMo8rukZTxFqqY++8wTHBISTHi2eiI5pHQ4Mp6gsOUFwvUcxfv0+7WRkMKzQKLcv2JIdxggabjTv/9dRcLDjhOKfIFp80yLczwVLMq/eB18CI9M29qn0Sr3g0H4YSu8AHmGrTFA/UR08U2tdXaFTovzga1XrygvUQdi9sEdESx97WpU2b1criiO6sMnMRqVqTE7rigt5Zk05OWRsEOty/4CP22cPRxn7wYRSGEUBFdE/Q3h4878DKBX1hVO+pL93V5DSo8XGf/wCBETNMqS8xqw2xhZsqgIOMqR+A5KoFGbqHGcp+PEUVr+CoouType2Zqt9C7moyhHnaUiiz2hh1V4CdFFF+NIes44nw2QIF15xzAq8RMbc6PA3GF+L52M6hlgcT2CNwF03AyifR79bg5lCxi8uNVGamY0rYNApKunTSImegUXBmVIdGS0STbPkAxON9vis8k=)
2025-05-13 23:06:50.592861 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO71Jg5EcAoO9T3G71FxLQreJRBQt6llBFMdYkZlWUOkgLqaf/ZsxOwdOeyQSjYMGKKWRMwxpQ6nhwW1IBjvfdA=)
2025-05-13 23:06:50.593197 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFORCVnD3nVAVwlyd9ecDahim16Q74dojfHWzNLDN8Nv)
2025-05-13 23:06:50.593657 | orchestrator |
2025-05-13 23:06:50.594877 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-05-13 23:06:50.595193 | orchestrator | Tuesday 13 May 2025 23:06:50 +0000 (0:00:01.108) 0:00:14.396 ***********
2025-05-13 23:06:56.092668 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-13 23:06:56.092829 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-13 23:06:56.094248 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-13 23:06:56.095742 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-13 23:06:56.097027 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-13 23:06:56.097595 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-13 23:06:56.098154 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-13 23:06:56.099024 | orchestrator |
2025-05-13 23:06:56.099618 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-05-13 23:06:56.100451 | orchestrator | Tuesday 13 May 2025 23:06:56 +0000 (0:00:05.501) 0:00:19.897 ***********
2025-05-13 23:06:56.282473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-13 23:06:56.283047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-13 23:06:56.285159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-05-13 23:06:56.286172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-05-13 23:06:56.287493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-05-13 23:06:56.288184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-05-13 23:06:56.288686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-05-13 23:06:56.289072 | orchestrator |
2025-05-13 23:06:56.289958 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-13 23:06:56.290464 | orchestrator | Tuesday 13 May 2025 23:06:56 +0000 (0:00:00.189) 0:00:20.087 ***********
2025-05-13 23:06:57.441724 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC7rne5k8DDdPiVagNEGQMOFM2PfVUyg2i+I4J9kaSNmy+bQwgq00MFW4A45BJQyBwllFpwtQe14wqiHLa+CO5s=)
2025-05-13 23:06:57.443200 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkaOMU86+YNBnpB0O9eck87NPfDPHOfiK8EgXdUThZ8F0Lqu7lFiA8by+s6hl0yKoBbqWTEWZK1TA8sl/zYin1DrQnTWcgwDgVeacgCewvfXmVSqrh+wycUJksa7vRC/LhNK3XtNbsOulVpYVACCtFeywlVJkwG+Kef9DQRViVo3amNIhUZUl52YqTudg04J9Odk7sAn3rkYh/1p2zhuxjqY6fmGQuN5lAhjtfWtNuDnvDxFRb7rcp/S4NBmdEkA1rHiyyejNKkzmdV81LD95a2hbqz+LhEMnRQfvapYraOIzc2s1VUJT5Obt6k7YMIQewHojcyb/9iFX+P4U1188d7XTrir6q49jvEHKvg8/R+f6Hh65YS8XD6A+h84ol/m7izETb4cvGBfwF2Y1miCguHZF8Hpk+pXJKpOtBDEJp0LGHRkqqL+J0SJT/6OUyxJsxuvVMZQhnhkMUl7ezS9wvVVQPVmLegZ0lt7Y0E0hT1PGyd97ua57G0YNK1h07SaE=)
2025-05-13 23:06:57.444507 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFcXPi9LN7VxU2vmU1gJQ5d37lqeQT+jfzFfEXx+Jy7)
2025-05-13 23:06:57.445473 | orchestrator |
2025-05-13 23:06:57.445964 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-13 23:06:57.447059 | orchestrator | Tuesday 13 May 2025 23:06:57 +0000 (0:00:01.158) 0:00:21.246 ***********
2025-05-13 23:06:58.546614 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkz+ce5sK+Skajnl4l+MCrPy2OuuYyQ/zbOKkqYwdfcc+rdxrNFMRw/LjPOfDSTyQf9KXJ9N05DNfiEo+doepkYdpcYLAP6dn/e2qcHRjVhBm6xvwh7/Yzv+k01WW5tg71aHdPe6YAUyTQX3Z/UzqQfQdhaU7JRwzgIzk4pZX93Gr6gRTdFEE3QPJMA+azf0R5V+MFuH+acPou6NOgjq2KchXAUClgwOg17pfbZRRyQ8iW6O9WqzWMIrus1JBFv5WltdoxSHnf3KjHW+zAoky15Yhav8H46cVSS9D0ZBRMfSqWWXEQ2lzKrryaX1+gn3HOJDmdxbbQK53o9NSJo0Yw0enHHReWAY7TxtRQ4v4e+IeovFUqW/cEv+o9bowrZFVmdwyVVIHVZxQjk8n4qZwYb5Hf5v1CktApuItQMdYO3gUShhaG8kNkSTjUaFiU5JIgNocmPuldAoNPxUGQYZF8/3qQWCPp/r8j4N0GcrMFrYrgfSk2b2h42HDD+pvFc7M=)
2025-05-13 23:06:58.546787 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMvCWluQYIgU9xI1PjolX+VndWa9RG064Q4zJnkTELxAaonWAp2rXYU5HsaaVt1BADsKsQVoRJk6eaZjsVgf5Ms=)
2025-05-13 23:06:58.547367 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKOT5f6DBaSQ6nk+qyumvNuigyd5AyOITpoGTKJblZ2C)
2025-05-13 23:06:58.548582 | orchestrator |
2025-05-13 23:06:58.548666 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-13 23:06:58.548732 | orchestrator | Tuesday 13 May 2025 23:06:58 +0000 (0:00:01.103) 0:00:22.349 ***********
2025-05-13 23:06:59.666347 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDG10W2mQ0GserUCoe1fpB2NfjgEgi0QXxVSbt2fpp5cgg4ztqJxQyCdQCvfPA1HLP7S6nxgxB5qFJmV2q5trUXl2ynpNlVVtlmrksRm3I8atRaZcGK8RzaEkdSTIpJq1dDCrUQY5oY30cRpYJus8gMm4Li+iSjiO6IHN3q2D6jUbRxgmsfQCazuljMGPxCMcNDTnFxcZc3rjlWV6C3rIikcD1GKF2/oZUH1KaVrxKuVu/s96re3n90SpN/YpaP1h30VJYPcrx4egi823ksf4iA7KVJqiLGHCWe8EDDOe/2Ch0RjTfsEseKSCHFLvaLnIfDXU8OUGzOHcdgG5rN/hYp4+HgoqavEB5byJd965fy8RntFQud1EBXxCC4n9zW2YclfNtJjAQ2KjL177QHrlbJjk5FcfT5a+bktL0gM93IatVtPrDy/k2CMOk/ijaVT4bX3ti0gjEP8MSWZoHonagK21uOWOH/PtfgLNx733KuUlA8+RB3B3nskiBTaLlu/Ns=)
2025-05-13 23:06:59.666444 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCO/gf9np0gQvWobp+4PpVYMPJMzitHlneImDQzvuDRqspuHZmh7xECCldk0q6T4fSRmreLdMsJlvSQXxfFvwiQ=)
2025-05-13 23:06:59.667401 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICZvmAwASqTm52+eEC4SIWjuAS6yEdSUhw0T/AofUToh)
2025-05-13 23:06:59.668172 |
orchestrator | 2025-05-13 23:06:59.668578 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 23:06:59.669085 | orchestrator | Tuesday 13 May 2025 23:06:59 +0000 (0:00:01.121) 0:00:23.470 *********** 2025-05-13 23:07:00.768589 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMTyDCSkO8FR8E16/Sv4NAjqRkgmIsObZiNlb7jQQsD8shxCD1pEHEtl+lHsyj8hhruSi6BTzs+Al+b0ibdlcqXl24U3xBfofbJVkxbn2JGwipM0OmWJSIjArZlNx1Ft9+qsxMQGfx0+OYTMr/ElPjJrUrL/X2zqOVA856tNomFrwAICDuw4Uo2hD6MTmhsnmgqOg5CkYk/nMyyeL0JZUIiGkw4wcARzYpZEjaD61RTCHrDZTZPaTvepJKARkZ4qvUV/HOM/XaLlh/7UA7v15vDmcLrj9LXBwQ1YFKwg2JjwCaMFpHiXx4nUpz8IXr01rLPM/BNq/Hblh3K1acLTd0+B7hu2GIOJ4lqT5cQ9XEaqJGoCjNfGXXGEJsbZizIqaYPuCDbKcO478xhwSRIQGdSjVCvGzBa94lQoXezQLXDEgeUgYEBkUl1rlsW56vhTCr78vUbbQP7eKvqaJHAz3PShp5dn+ATDIYYTDrmPCdoYXpVi8eQt8vISNWXUjQxVM=) 2025-05-13 23:07:00.769745 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnnAHK9kFvLYeiMD1zaJE2hJmbWde0cD6/vAbJKeH1peeON0vy7eWunJ/v1ODNLZ2Rrrgm0iUtbCF97kC7/B9Q=) 2025-05-13 23:07:00.771146 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ+LbBgbVNki1U+mIw0hAf/UsB/wXJQ2V5C2uQs7LvCC) 2025-05-13 23:07:00.772142 | orchestrator | 2025-05-13 23:07:00.773346 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 23:07:00.773728 | orchestrator | Tuesday 13 May 2025 23:07:00 +0000 (0:00:01.102) 0:00:24.573 *********** 2025-05-13 23:07:01.832124 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGKkgekIo9jGWjM41/+Cvz4uqtIscRPqY2WXxDEyTfP5/eumYuYpi0FjAAjojMkyOe7lu2mvIGn04QoLqo9a+pIM1THBFy9v2z1VGZvdm6w2uDH8Y6bf+K9aDjFF46XMTHDF9FaKGWfzFqhVdiKunpOFA4oI5mY/rlQe3KVo1AnwRy4Uz3zXHHCtVnU+l14AASMSaCfq7ITKqWOep02arHAxnJT70qI2LLc7AEF0gcpWRszBVvozV6Sy9JfmRbvp3JAGtMF0zzYOfKLv9Vkf6NJmWTLCmUyZ95eO8zEea00rNDDmN34rK0E62zuzYZ2yRpyiu5FbfqSAvC64oNJtzyzkLBlO8aSY9JVSHPNgdGmreQf9eH+q+bW/xz9zmJSWEfD6+GD4JAQLeUd6qYlNZX6mOFRpE6l9s8FjUdrMLBZTS9rGXKx0JoF1edAFlaDG6xuZWnqegPBh0NnvZor5pcZuhqy0BawiAkvIE+MdzpFRC0mbOpJabZ2/1fFo0G8mM=) 2025-05-13 23:07:01.832984 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMy6ePLIDs3j3MpUlfFllOF1fYRWaCvH+UAJ7eEdJgSL3zaIgoDGcb0YFZVhi2ludAk10dQgl9sm8eRl1PgV8yY=) 2025-05-13 23:07:01.834129 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINpSClNk/bo3WQqamkexZXA+/UdM/g7/xG0E4hv+aHYX) 2025-05-13 23:07:01.835223 | orchestrator | 2025-05-13 23:07:01.836260 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 23:07:01.836823 | orchestrator | Tuesday 13 May 2025 23:07:01 +0000 (0:00:01.063) 0:00:25.636 *********** 2025-05-13 23:07:02.904748 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOdTakDQ0m3z5z4Mpt4wR/I3Aw7Vb4F+Bwtrprt96nrl3L5oy6WFZ7qefoHxsKaahnAA9PAjWa9O6N3lM2wEyIg=) 2025-05-13 23:07:02.905445 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCqlaF9tsfoPKrlMefNlP5qHp5oTzGYMzeRUi2eWkiDE48Tsnv3EWoYvhHgOB+e4ac3Q06cdsdUyhlGH22A4rZ9NCuNATpHUDJHW3pcIz+OY+36PZVzF/aDJCiifx/GCr0bjwaYmyB0B0IkrXW6UShbWtFIXA56SfCpA9fjyDsKlYaPu8hJ/UKzpCx6u4t9kAOWMI2vFNJR1QA8JDTHPxymadaJsFn5JYt2JEiFlLhRtelSMC+SLZDkDw6UK4lxAu2PHQeZ23qIKVSPUoHAaB1k26ai5J+ZB4keH5s/da3JRLuOHtcfWtDQX/EseIsZ+7t5rFIMfNkwyf3/uGct7hBKIXrwgQRK0epgEG8wFyK886t+ntc8ekpIeipTclQ5AgY+lJfLdCIadQB3NLxgOC5T3BgtcDT3VMRtPlWgM6wNx3WOet8w5gsG707BNBSHh80phRdf3ClExYTCQv4mIHLABN36s0ESSGitcGS6nb8s0AR6olA8I+Dof2J6oR1kDa0=) 2025-05-13 23:07:02.907010 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINh7V59aLKNWVm8t2AkQM0U24z41wnGvw9HExnxcAAsJ) 2025-05-13 23:07:02.907723 | orchestrator | 2025-05-13 23:07:02.908155 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 23:07:02.908808 | orchestrator | Tuesday 13 May 2025 23:07:02 +0000 (0:00:01.072) 0:00:26.708 *********** 2025-05-13 23:07:04.046966 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFORCVnD3nVAVwlyd9ecDahim16Q74dojfHWzNLDN8Nv) 2025-05-13 23:07:04.047079 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFvWrggBLCpI1yXR/hrE8+HYqHGZ72v0BzPEV3DC3zJ3qFbTtXg/QNXZMo8rukZTxFqqY++8wTHBISTHi2eiI5pHQ4Mp6gsOUFwvUcxfv0+7WRkMKzQKLcv2JIdxggabjTv/9dRcLDjhOKfIFp80yLczwVLMq/eB18CI9M29qn0Sr3g0H4YSu8AHmGrTFA/UR08U2tdXaFTovzga1XrygvUQdi9sEdESx97WpU2b1criiO6sMnMRqVqTE7rigt5Zk05OWRsEOty/4CP22cPRxn7wYRSGEUBFdE/Q3h4878DKBX1hVO+pL93V5DSo8XGf/wCBETNMqS8xqw2xhZsqgIOMqR+A5KoFGbqHGcp+PEUVr+CoouType2Zqt9C7moyhHnaUiiz2hh1V4CdFFF+NIes44nw2QIF15xzAq8RMbc6PA3GF+L52M6hlgcT2CNwF03AyifR79bg5lCxi8uNVGamY0rYNApKunTSImegUXBmVIdGS0STbPkAxON9vis8k=) 2025-05-13 23:07:04.047161 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO71Jg5EcAoO9T3G71FxLQreJRBQt6llBFMdYkZlWUOkgLqaf/ZsxOwdOeyQSjYMGKKWRMwxpQ6nhwW1IBjvfdA=) 2025-05-13 23:07:04.047524 | orchestrator | 2025-05-13 23:07:04.047542 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-13 23:07:04.047794 | orchestrator | Tuesday 13 May 2025 23:07:04 +0000 (0:00:01.143) 0:00:27.852 *********** 2025-05-13 23:07:04.451886 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-13 23:07:04.453574 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-13 23:07:04.455124 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-13 23:07:04.456058 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-13 23:07:04.456783 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-13 23:07:04.457750 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-13 23:07:04.458476 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-13 23:07:04.459107 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:07:04.459780 | orchestrator | 2025-05-13 23:07:04.460610 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-13 23:07:04.461123 | orchestrator | Tuesday 13 May 2025 23:07:04 +0000 (0:00:00.404) 0:00:28.256 *********** 2025-05-13 23:07:04.512532 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:07:04.515912 | orchestrator | 2025-05-13 
23:07:04.515972 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-13 23:07:04.516976 | orchestrator | Tuesday 13 May 2025 23:07:04 +0000 (0:00:00.061) 0:00:28.317 *********** 2025-05-13 23:07:04.578997 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:07:04.580709 | orchestrator | 2025-05-13 23:07:04.582359 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-13 23:07:04.583116 | orchestrator | Tuesday 13 May 2025 23:07:04 +0000 (0:00:00.066) 0:00:28.384 *********** 2025-05-13 23:07:05.093912 | orchestrator | changed: [testbed-manager] 2025-05-13 23:07:05.094231 | orchestrator | 2025-05-13 23:07:05.094981 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:07:05.095219 | orchestrator | 2025-05-13 23:07:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:07:05.095256 | orchestrator | 2025-05-13 23:07:05 | INFO  | Please wait and do not abort execution. 2025-05-13 23:07:05.097211 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 23:07:05.098658 | orchestrator | 2025-05-13 23:07:05.100041 | orchestrator | 2025-05-13 23:07:05.101198 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:07:05.101980 | orchestrator | Tuesday 13 May 2025 23:07:05 +0000 (0:00:00.515) 0:00:28.899 *********** 2025-05-13 23:07:05.103215 | orchestrator | =============================================================================== 2025-05-13 23:07:05.104239 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.13s 2025-05-13 23:07:05.104719 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.50s 2025-05-13 23:07:05.105720 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.27s 2025-05-13 23:07:05.106377 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-05-13 23:07:05.106967 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-05-13 23:07:05.107601 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-05-13 23:07:05.109578 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-05-13 23:07:05.110174 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-05-13 23:07:05.110799 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-05-13 23:07:05.111416 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-13 23:07:05.111926 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-13 23:07:05.112681 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-13 23:07:05.113493 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-13 23:07:05.114206 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-13 23:07:05.114921 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 
1.06s 2025-05-13 23:07:05.115795 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-13 23:07:05.116575 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.52s 2025-05-13 23:07:05.117134 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.40s 2025-05-13 23:07:05.117726 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2025-05-13 23:07:05.118363 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-05-13 23:07:05.576422 | orchestrator | + osism apply squid 2025-05-13 23:07:07.279828 | orchestrator | 2025-05-13 23:07:07 | INFO  | Task 25a963b7-526a-4e5c-a7bf-f42e3cea7c71 (squid) was prepared for execution. 2025-05-13 23:07:07.279945 | orchestrator | 2025-05-13 23:07:07 | INFO  | It takes a moment until task 25a963b7-526a-4e5c-a7bf-f42e3cea7c71 (squid) has been started and output is visible here. 2025-05-13 23:07:11.375175 | orchestrator | 2025-05-13 23:07:11.376566 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-13 23:07:11.377678 | orchestrator | 2025-05-13 23:07:11.378787 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-13 23:07:11.380060 | orchestrator | Tuesday 13 May 2025 23:07:11 +0000 (0:00:00.177) 0:00:00.177 *********** 2025-05-13 23:07:11.472402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-13 23:07:11.472872 | orchestrator | 2025-05-13 23:07:11.473918 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-13 23:07:11.475026 | orchestrator | Tuesday 13 May 2025 23:07:11 +0000 (0:00:00.104) 0:00:00.282 *********** 2025-05-13 23:07:12.894118 | orchestrator | ok: [testbed-manager] 2025-05-13 23:07:12.894578 | orchestrator | 2025-05-13 23:07:12.895137 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-13 23:07:12.895864 | orchestrator | Tuesday 13 May 2025 23:07:12 +0000 (0:00:01.420) 0:00:01.703 *********** 2025-05-13 23:07:14.104876 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-13 23:07:14.105448 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-13 23:07:14.105768 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-13 23:07:14.107493 | orchestrator | 2025-05-13 23:07:14.107941 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-13 23:07:14.108100 | orchestrator | Tuesday 13 May 2025 23:07:14 +0000 (0:00:01.211) 0:00:02.914 *********** 2025-05-13 23:07:15.231388 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-13 23:07:15.233629 | orchestrator | 2025-05-13 23:07:15.233780 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-13 23:07:15.235112 | orchestrator | Tuesday 13 May 2025 23:07:15 +0000 (0:00:01.125) 0:00:04.040 *********** 2025-05-13 23:07:15.599192 | orchestrator | ok: [testbed-manager] 2025-05-13 23:07:15.599436 | orchestrator | 2025-05-13 23:07:15.600487 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] 
********************* 2025-05-13 23:07:15.601445 | orchestrator | Tuesday 13 May 2025 23:07:15 +0000 (0:00:00.365) 0:00:04.406 *********** 2025-05-13 23:07:16.551408 | orchestrator | changed: [testbed-manager] 2025-05-13 23:07:16.551574 | orchestrator | 2025-05-13 23:07:16.551995 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-13 23:07:16.552368 | orchestrator | Tuesday 13 May 2025 23:07:16 +0000 (0:00:00.954) 0:00:05.360 *********** 2025-05-13 23:07:48.285673 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-05-13 23:07:48.285792 | orchestrator | ok: [testbed-manager] 2025-05-13 23:07:48.285810 | orchestrator | 2025-05-13 23:07:48.285823 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-13 23:07:48.285836 | orchestrator | Tuesday 13 May 2025 23:07:48 +0000 (0:00:31.730) 0:00:37.091 *********** 2025-05-13 23:08:00.647579 | orchestrator | changed: [testbed-manager] 2025-05-13 23:08:00.647703 | orchestrator | 2025-05-13 23:08:00.647863 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-13 23:08:00.649961 | orchestrator | Tuesday 13 May 2025 23:08:00 +0000 (0:00:12.360) 0:00:49.451 *********** 2025-05-13 23:09:00.721157 | orchestrator | Pausing for 60 seconds 2025-05-13 23:09:00.721324 | orchestrator | changed: [testbed-manager] 2025-05-13 23:09:00.723069 | orchestrator | 2025-05-13 23:09:00.724090 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-13 23:09:00.724837 | orchestrator | Tuesday 13 May 2025 23:09:00 +0000 (0:01:00.076) 0:01:49.528 *********** 2025-05-13 23:09:00.805854 | orchestrator | ok: [testbed-manager] 2025-05-13 23:09:00.806046 | orchestrator | 2025-05-13 23:09:00.806957 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-13 23:09:00.808053 | orchestrator | Tuesday 13 May 2025 23:09:00 +0000 (0:00:00.087) 0:01:49.616 *********** 2025-05-13 23:09:01.474509 | orchestrator | changed: [testbed-manager] 2025-05-13 23:09:01.476745 | orchestrator | 2025-05-13 23:09:01.478591 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:09:01.478674 | orchestrator | 2025-05-13 23:09:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:09:01.478736 | orchestrator | 2025-05-13 23:09:01 | INFO  | Please wait and do not abort execution. 
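The handlers above restart the squid container via docker compose, pause, and then poll until Docker reports the service healthy. A minimal sketch of that wait loop in bash, assuming the compose file the role copied lives under /opt/squid and the container is named "squid" (both names are assumptions for illustration; only the /opt/squid directory itself appears in the log, and the role's actual implementation may differ):

    # Bring the stack up, then poll the container's healthcheck status.
    docker compose --project-directory /opt/squid up -d

    status=""
    for _ in $(seq 1 30); do
        status="$(docker inspect --format '{{.State.Health.Status}}' squid 2>/dev/null || true)"
        [ "${status}" = "healthy" ] && break
        sleep 5
    done

    if [ "${status}" != "healthy" ]; then
        echo "squid did not become healthy in time" >&2
        exit 1
    fi

Polling {{.State.Health.Status}} only works when the image defines a HEALTHCHECK; without one, docker inspect has no Health field and the loop above would time out.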
2025-05-13 23:09:01.481269 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:09:01.482123 | orchestrator | 2025-05-13 23:09:01.482632 | orchestrator | 2025-05-13 23:09:01.482982 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:09:01.483073 | orchestrator | Tuesday 13 May 2025 23:09:01 +0000 (0:00:00.668) 0:01:50.284 *********** 2025-05-13 23:09:01.483712 | orchestrator | =============================================================================== 2025-05-13 23:09:01.483996 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-05-13 23:09:01.484758 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.73s 2025-05-13 23:09:01.485768 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.36s 2025-05-13 23:09:01.487315 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.42s 2025-05-13 23:09:01.487764 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.21s 2025-05-13 23:09:01.487925 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.13s 2025-05-13 23:09:01.488698 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2025-05-13 23:09:01.489128 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s 2025-05-13 23:09:01.489366 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-05-13 23:09:01.490186 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-05-13 23:09:01.490723 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.09s 2025-05-13 23:09:02.021905 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-13 23:09:02.022743 | orchestrator | ++ semver latest 9.0.0 2025-05-13 23:09:02.069733 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-13 23:09:02.069819 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-13 23:09:02.071313 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-13 23:09:03.806366 | orchestrator | 2025-05-13 23:09:03 | INFO  | Task 27344446-9dda-4de3-9be2-6b6512bc7302 (operator) was prepared for execution. 2025-05-13 23:09:03.806465 | orchestrator | 2025-05-13 23:09:03 | INFO  | It takes a moment until task 27344446-9dda-4de3-9be2-6b6512bc7302 (operator) has been started and output is visible here. 
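The xtrace lines above ("+ [[ latest != \l\a\t\e\s\t ]]", "++ semver latest 9.0.0", "+ [[ -1 -lt 0 ]]") come from a version guard in the deploy script: a semver helper compares the requested manager version against 9.0.0 and prints -1, 0, or 1 like a comparator, and the literal version "latest" always passes. A rough bash equivalent of the pattern (the variable name is an assumption; the semver helper is taken from the trace and assumed to be on PATH):

    MANAGER_VERSION="latest"   # "latest" in this run, per the trace above

    # Pinned versions must be >= 9.0.0; "latest" skips the check entirely.
    if [[ "$(semver "${MANAGER_VERSION}" 9.0.0)" -lt 0 \
          && "${MANAGER_VERSION}" != "latest" ]]; then
        echo "manager version ${MANAGER_VERSION} is older than 9.0.0" >&2
        exit 1
    fi

With MANAGER_VERSION=latest this reproduces the trace: semver prints -1, "[[ -1 -lt 0 ]]" is true, the "!= latest" test is false, and the guard body is skipped — which is why the run proceeds straight to "osism apply operator".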
2025-05-13 23:09:07.801376 | orchestrator | 2025-05-13 23:09:07.803573 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-13 23:09:07.804579 | orchestrator | 2025-05-13 23:09:07.805931 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 23:09:07.807228 | orchestrator | Tuesday 13 May 2025 23:09:07 +0000 (0:00:00.135) 0:00:00.135 *********** 2025-05-13 23:09:12.108534 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:09:12.109009 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:09:12.109972 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:09:12.110773 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:09:12.111789 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:09:12.112476 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:09:12.113621 | orchestrator | 2025-05-13 23:09:12.114275 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-13 23:09:12.114875 | orchestrator | Tuesday 13 May 2025 23:09:12 +0000 (0:00:04.308) 0:00:04.444 *********** 2025-05-13 23:09:12.889539 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:09:12.890422 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:09:12.891402 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:09:12.891627 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:09:12.892870 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:09:12.893567 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:09:12.894906 | orchestrator | 2025-05-13 23:09:12.895909 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-13 23:09:12.896033 | orchestrator | 2025-05-13 23:09:12.896762 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-13 23:09:12.897771 | orchestrator | Tuesday 13 May 2025 23:09:12 +0000 (0:00:00.780) 0:00:05.224 *********** 2025-05-13 23:09:12.991723 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:09:13.013981 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:09:13.047076 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:09:13.091991 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:09:13.092185 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:09:13.092563 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:09:13.092855 | orchestrator | 2025-05-13 23:09:13.093396 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-13 23:09:13.093679 | orchestrator | Tuesday 13 May 2025 23:09:13 +0000 (0:00:00.201) 0:00:05.426 *********** 2025-05-13 23:09:13.165757 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:09:13.188736 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:09:13.215486 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:09:13.273651 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:09:13.273857 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:09:13.274327 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:09:13.275020 | orchestrator | 2025-05-13 23:09:13.275602 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-13 23:09:13.276141 | orchestrator | Tuesday 13 May 2025 23:09:13 +0000 (0:00:00.183) 0:00:05.609 *********** 2025-05-13 23:09:13.896852 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:09:13.898004 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:13.898724 | orchestrator | changed: [testbed-node-3] 2025-05-13 
23:09:13.899610 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:09:13.900316 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:13.901759 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:09:13.902956 | orchestrator | 2025-05-13 23:09:13.905655 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-13 23:09:13.906802 | orchestrator | Tuesday 13 May 2025 23:09:13 +0000 (0:00:00.621) 0:00:06.231 *********** 2025-05-13 23:09:14.732421 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:09:14.732542 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:09:14.732567 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:14.732912 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:14.733940 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:09:14.735251 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:14.735657 | orchestrator | 2025-05-13 23:09:14.736256 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-13 23:09:14.736896 | orchestrator | Tuesday 13 May 2025 23:09:14 +0000 (0:00:00.833) 0:00:07.065 *********** 2025-05-13 23:09:15.941430 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-13 23:09:15.942254 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-13 23:09:15.944226 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-13 23:09:15.945177 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-13 23:09:15.946170 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-13 23:09:15.947293 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-13 23:09:15.948311 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-13 23:09:15.949437 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-13 23:09:15.950648 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-13 23:09:15.951170 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-13 23:09:15.951899 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-13 23:09:15.957706 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-13 23:09:15.958590 | orchestrator | 2025-05-13 23:09:15.959304 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-13 23:09:15.959934 | orchestrator | Tuesday 13 May 2025 23:09:15 +0000 (0:00:01.209) 0:00:08.274 *********** 2025-05-13 23:09:17.300128 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:17.300882 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:09:17.302145 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:09:17.303560 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:17.304461 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:09:17.304919 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:17.306836 | orchestrator | 2025-05-13 23:09:17.307315 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-13 23:09:17.308336 | orchestrator | Tuesday 13 May 2025 23:09:17 +0000 (0:00:01.359) 0:00:09.634 *********** 2025-05-13 23:09:18.499386 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-13 23:09:18.499789 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-13 23:09:18.500730 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-13 23:09:18.597826 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 23:09:18.598605 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 23:09:18.602782 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 23:09:18.602885 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 23:09:18.602898 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 23:09:18.602908 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 23:09:18.603676 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-13 23:09:18.604714 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-13 23:09:18.605432 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-13 23:09:18.606060 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-13 23:09:18.607131 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-13 23:09:18.607741 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-13 23:09:18.608421 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-13 23:09:18.609071 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-13 23:09:18.609558 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-13 23:09:18.610637 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-13 23:09:18.611079 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-13 23:09:18.611724 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-13 23:09:18.612484 | orchestrator | 2025-05-13 23:09:18.613065 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-13 23:09:18.613499 | orchestrator | Tuesday 13 May 2025 23:09:18 +0000 (0:00:01.299) 0:00:10.933 *********** 2025-05-13 23:09:19.183831 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:09:19.183935 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:19.184423 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:09:19.185296 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:19.186136 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:19.186566 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:09:19.186909 | orchestrator | 2025-05-13 23:09:19.187736 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-13 23:09:19.188559 | orchestrator | Tuesday 13 May 2025 23:09:19 +0000 (0:00:00.585) 0:00:11.519 *********** 2025-05-13 23:09:19.268058 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:09:19.290443 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:09:19.322851 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:09:19.387857 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:09:19.388814 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:09:19.389543 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:09:19.390437 | orchestrator | 2025-05-13 23:09:19.392644 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
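The operator role tasks in this play amount to standard user provisioning: group, user, supplementary groups, a sudoers drop-in, locale exports, and an ~/.ssh directory. An approximate shell equivalent, as a sketch only — the user name "operator" and the sudoers content are assumptions (the role takes both from configuration and templates):

    # Create the operator group and user ("Create operator group" / "Create user").
    groupadd operator 2>/dev/null || true
    useradd --create-home --gid operator --shell /bin/bash operator 2>/dev/null || true

    # Membership in adm and sudo ("Add user to additional groups").
    usermod --append --groups adm,sudo operator

    # Passwordless sudo drop-in ("Copy user sudoers file"); content assumed.
    echo 'operator ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/operator
    chmod 0440 /etc/sudoers.d/operator

    # Locale exports ("Set language variables in .bashrc configuration file").
    for v in LANGUAGE LANG LC_ALL; do
        echo "export ${v}=C.UTF-8" >> /home/operator/.bashrc
    done

    # ~/.ssh for the authorized keys the next task installs.
    install -d -m 0700 -o operator -g operator /home/operator/.ssh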
2025-05-13 23:09:19.393156 | orchestrator | Tuesday 13 May 2025 23:09:19 +0000 (0:00:00.203) 0:00:11.722 *********** 2025-05-13 23:09:20.105558 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-13 23:09:20.105660 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:09:20.106273 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-13 23:09:20.106379 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:09:20.106920 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 23:09:20.107613 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:20.108604 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-13 23:09:20.108648 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:09:20.109278 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 23:09:20.109486 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:20.110286 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 23:09:20.111482 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:20.111541 | orchestrator | 2025-05-13 23:09:20.111714 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-13 23:09:20.112235 | orchestrator | Tuesday 13 May 2025 23:09:20 +0000 (0:00:00.716) 0:00:12.439 *********** 2025-05-13 23:09:20.162980 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:09:20.224272 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:09:20.249890 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:09:20.286505 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:09:20.287401 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:09:20.289492 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:09:20.290468 | orchestrator | 2025-05-13 23:09:20.293411 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-13 23:09:20.294653 | orchestrator | Tuesday 13 May 2025 23:09:20 +0000 (0:00:00.182) 0:00:12.621 *********** 2025-05-13 23:09:20.356861 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:09:20.394746 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:09:20.427774 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:09:20.457548 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:09:20.499142 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:09:20.500792 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:09:20.504463 | orchestrator | 2025-05-13 23:09:20.504493 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-13 23:09:20.505915 | orchestrator | Tuesday 13 May 2025 23:09:20 +0000 (0:00:00.213) 0:00:12.834 *********** 2025-05-13 23:09:20.597606 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:09:20.628258 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:09:20.664039 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:09:20.711250 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:09:20.711339 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:09:20.711347 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:09:20.711355 | orchestrator | 2025-05-13 23:09:20.712396 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-13 23:09:20.712424 | orchestrator | Tuesday 13 May 2025 23:09:20 +0000 (0:00:00.208) 0:00:13.043 *********** 2025-05-13 23:09:21.403614 | orchestrator | changed: [testbed-node-1] 2025-05-13 
23:09:21.405351 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:09:21.405878 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:09:21.406981 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:21.407681 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:21.407876 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:21.408340 | orchestrator | 2025-05-13 23:09:21.409604 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-13 23:09:21.410524 | orchestrator | Tuesday 13 May 2025 23:09:21 +0000 (0:00:00.694) 0:00:13.737 *********** 2025-05-13 23:09:21.508013 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:09:21.544397 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:09:21.560151 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:09:21.679301 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:09:21.679482 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:09:21.680110 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:09:21.680426 | orchestrator | 2025-05-13 23:09:21.682301 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:09:21.683436 | orchestrator | 2025-05-13 23:09:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:09:21.684618 | orchestrator | 2025-05-13 23:09:21 | INFO  | Please wait and do not abort execution. 2025-05-13 23:09:21.686122 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:09:21.687694 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:09:21.688806 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:09:21.689482 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:09:21.690138 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:09:21.690846 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:09:21.691384 | orchestrator | 2025-05-13 23:09:21.691830 | orchestrator | 2025-05-13 23:09:21.692466 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:09:21.692998 | orchestrator | Tuesday 13 May 2025 23:09:21 +0000 (0:00:00.278) 0:00:14.015 *********** 2025-05-13 23:09:21.693730 | orchestrator | =============================================================================== 2025-05-13 23:09:21.694191 | orchestrator | Gathering Facts --------------------------------------------------------- 4.31s 2025-05-13 23:09:21.694731 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.36s 2025-05-13 23:09:21.695295 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s 2025-05-13 23:09:21.695803 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s 2025-05-13 23:09:21.696174 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s 2025-05-13 23:09:21.696669 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s 2025-05-13 23:09:21.697154 | orchestrator | 
osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s 2025-05-13 23:09:21.697705 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s 2025-05-13 23:09:21.698297 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s 2025-05-13 23:09:21.698732 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s 2025-05-13 23:09:21.699298 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.28s 2025-05-13 23:09:21.699712 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.21s 2025-05-13 23:09:21.700171 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.21s 2025-05-13 23:09:21.700586 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2025-05-13 23:09:21.701001 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2025-05-13 23:09:21.701588 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2025-05-13 23:09:21.702009 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2025-05-13 23:09:22.192035 | orchestrator | + osism apply --environment custom facts 2025-05-13 23:09:23.925955 | orchestrator | 2025-05-13 23:09:23 | INFO  | Trying to run play facts in environment custom 2025-05-13 23:09:23.993602 | orchestrator | 2025-05-13 23:09:23 | INFO  | Task a2a97773-e9cb-474d-b787-31ef75b10a1e (facts) was prepared for execution. 2025-05-13 23:09:23.993698 | orchestrator | 2025-05-13 23:09:23 | INFO  | It takes a moment until task a2a97773-e9cb-474d-b787-31ef75b10a1e (facts) has been started and output is visible here. 
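The facts play that follows ("Copy custom network devices fact", "Copy custom ceph devices facts") works by dropping files into Ansible's local facts directory, so later plays can read them as ansible_local values after fact gathering. A minimal sketch of that mechanism — the fact name and JSON content here are illustrative, not the testbed's actual files:

    # Ansible picks up files under /etc/ansible/facts.d ending in .fact;
    # static JSON is enough, and executables that print JSON also work.
    install -d -m 0755 /etc/ansible/facts.d

    cat > /etc/ansible/facts.d/testbed_network_devices.fact <<'EOF'
    {"primary": "eth0"}
    EOF

    # After the next fact gathering this is available to playbooks as
    # ansible_local.testbed_network_devices.primary

This is why the log re-gathers facts for all hosts at the end of the play: the newly copied .fact files are only visible once setup runs again.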
2025-05-13 23:09:27.969308 | orchestrator | 2025-05-13 23:09:27.970709 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-13 23:09:27.973429 | orchestrator | 2025-05-13 23:09:27.974511 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-13 23:09:27.979009 | orchestrator | Tuesday 13 May 2025 23:09:27 +0000 (0:00:00.090) 0:00:00.090 *********** 2025-05-13 23:09:29.360146 | orchestrator | ok: [testbed-manager] 2025-05-13 23:09:29.360501 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:09:29.361570 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:29.363272 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:09:29.364796 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:29.366184 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:09:29.367143 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:29.369014 | orchestrator | 2025-05-13 23:09:29.369406 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-13 23:09:29.370512 | orchestrator | Tuesday 13 May 2025 23:09:29 +0000 (0:00:01.400) 0:00:01.490 *********** 2025-05-13 23:09:30.673165 | orchestrator | ok: [testbed-manager] 2025-05-13 23:09:30.673418 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:30.674851 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:30.676110 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:09:30.676949 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:30.678696 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:09:30.680639 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:09:30.682874 | orchestrator | 2025-05-13 23:09:30.682928 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-05-13 23:09:30.683020 | orchestrator | 2025-05-13 23:09:30.684813 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-13 23:09:30.686262 | orchestrator | Tuesday 13 May 2025 23:09:30 +0000 (0:00:01.314) 0:00:02.805 *********** 2025-05-13 23:09:30.790914 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:09:30.792897 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:09:30.792960 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:09:30.792974 | orchestrator | 2025-05-13 23:09:30.796549 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-13 23:09:30.797426 | orchestrator | Tuesday 13 May 2025 23:09:30 +0000 (0:00:00.119) 0:00:02.924 *********** 2025-05-13 23:09:30.985518 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:09:30.986403 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:09:30.987410 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:09:30.987903 | orchestrator | 2025-05-13 23:09:30.988993 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-13 23:09:30.990385 | orchestrator | Tuesday 13 May 2025 23:09:30 +0000 (0:00:00.195) 0:00:03.120 *********** 2025-05-13 23:09:31.177152 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:09:31.179708 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:09:31.186222 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:09:31.186409 | orchestrator | 2025-05-13 23:09:31.186483 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-13 23:09:31.191098 | orchestrator | Tuesday 13 
May 2025 23:09:31 +0000 (0:00:00.191) 0:00:03.311 *********** 2025-05-13 23:09:31.326922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:09:31.327020 | orchestrator | 2025-05-13 23:09:31.327037 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-13 23:09:31.332067 | orchestrator | Tuesday 13 May 2025 23:09:31 +0000 (0:00:00.148) 0:00:03.459 *********** 2025-05-13 23:09:31.769234 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:09:31.769538 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:09:31.771542 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:09:31.775799 | orchestrator | 2025-05-13 23:09:31.776686 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-13 23:09:31.777440 | orchestrator | Tuesday 13 May 2025 23:09:31 +0000 (0:00:00.444) 0:00:03.904 *********** 2025-05-13 23:09:31.885789 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:09:31.886263 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:09:31.886915 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:09:31.887473 | orchestrator | 2025-05-13 23:09:31.888460 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-13 23:09:31.888778 | orchestrator | Tuesday 13 May 2025 23:09:31 +0000 (0:00:00.116) 0:00:04.021 *********** 2025-05-13 23:09:33.062815 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:33.066863 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:33.069523 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:33.072450 | orchestrator | 2025-05-13 23:09:33.073383 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-13 23:09:33.074266 | orchestrator | Tuesday 13 May 2025 23:09:33 +0000 (0:00:01.173) 0:00:05.194 *********** 2025-05-13 23:09:33.584144 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:09:33.584691 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:09:33.585730 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:09:33.586855 | orchestrator | 2025-05-13 23:09:33.587928 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-13 23:09:33.589448 | orchestrator | Tuesday 13 May 2025 23:09:33 +0000 (0:00:00.524) 0:00:05.718 *********** 2025-05-13 23:09:34.658389 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:34.658527 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:34.658554 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:34.658574 | orchestrator | 2025-05-13 23:09:34.658594 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-13 23:09:34.658638 | orchestrator | Tuesday 13 May 2025 23:09:34 +0000 (0:00:01.069) 0:00:06.788 *********** 2025-05-13 23:09:48.249905 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:48.250082 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:48.250101 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:48.250712 | orchestrator | 2025-05-13 23:09:48.251340 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-13 23:09:48.252346 | orchestrator | Tuesday 13 May 2025 23:09:48 +0000 (0:00:13.591) 0:00:20.379 *********** 2025-05-13 23:09:48.341512 | orchestrator | skipping: 
[testbed-node-3] 2025-05-13 23:09:48.341718 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:09:48.342268 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:09:48.343662 | orchestrator | 2025-05-13 23:09:48.343676 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-13 23:09:48.344401 | orchestrator | Tuesday 13 May 2025 23:09:48 +0000 (0:00:00.096) 0:00:20.476 *********** 2025-05-13 23:09:55.349395 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:09:55.350148 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:09:55.350271 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:09:55.351950 | orchestrator | 2025-05-13 23:09:55.353525 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-13 23:09:55.354460 | orchestrator | Tuesday 13 May 2025 23:09:55 +0000 (0:00:07.005) 0:00:27.482 *********** 2025-05-13 23:09:55.861632 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:09:55.862125 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:09:55.862611 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:09:55.863432 | orchestrator | 2025-05-13 23:09:55.864960 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-13 23:09:55.865780 | orchestrator | Tuesday 13 May 2025 23:09:55 +0000 (0:00:00.511) 0:00:27.993 *********** 2025-05-13 23:09:59.454957 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-13 23:09:59.455533 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-13 23:09:59.456503 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-13 23:09:59.458524 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-13 23:09:59.459013 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-13 23:09:59.460012 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-13 23:09:59.460655 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-13 23:09:59.461102 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-13 23:09:59.461724 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-13 23:09:59.462220 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-13 23:09:59.463397 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-13 23:09:59.464145 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-13 23:09:59.464219 | orchestrator | 2025-05-13 23:09:59.464637 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-13 23:09:59.465851 | orchestrator | Tuesday 13 May 2025 23:09:59 +0000 (0:00:03.591) 0:00:31.585 *********** 2025-05-13 23:10:00.681802 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:00.683171 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:00.684987 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:00.686585 | orchestrator | 2025-05-13 23:10:00.689262 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-13 23:10:00.689876 | orchestrator | 2025-05-13 23:10:00.690996 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-13 23:10:00.692376 | orchestrator | Tuesday 
13 May 2025 23:10:00 +0000 (0:00:01.227) 0:00:32.813 *********** 2025-05-13 23:10:05.381714 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:05.381912 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:05.382645 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:05.382779 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:05.383270 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:05.383868 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:05.384278 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:05.384675 | orchestrator | 2025-05-13 23:10:05.385148 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:10:05.385532 | orchestrator | 2025-05-13 23:10:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:10:05.385555 | orchestrator | 2025-05-13 23:10:05 | INFO  | Please wait and do not abort execution. 2025-05-13 23:10:05.386157 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:10:05.386853 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:10:05.387430 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:10:05.387463 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:10:05.388037 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:10:05.388410 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:10:05.388702 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:10:05.389324 | orchestrator | 2025-05-13 23:10:05.389918 | orchestrator | 2025-05-13 23:10:05.390291 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:10:05.390745 | orchestrator | Tuesday 13 May 2025 23:10:05 +0000 (0:00:04.702) 0:00:37.515 *********** 2025-05-13 23:10:05.391474 | orchestrator | =============================================================================== 2025-05-13 23:10:05.392645 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.59s 2025-05-13 23:10:05.392692 | orchestrator | Install required packages (Debian) -------------------------------------- 7.01s 2025-05-13 23:10:05.392713 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.70s 2025-05-13 23:10:05.392804 | orchestrator | Copy fact files --------------------------------------------------------- 3.59s 2025-05-13 23:10:05.393013 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s 2025-05-13 23:10:05.393548 | orchestrator | Copy fact file ---------------------------------------------------------- 1.31s 2025-05-13 23:10:05.394376 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.23s 2025-05-13 23:10:05.394528 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.17s 2025-05-13 23:10:05.394828 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s 2025-05-13 23:10:05.395212 | orchestrator | osism.commons.repository : Remove sources.list file 
--------------------- 0.52s 2025-05-13 23:10:05.395739 | orchestrator | Create custom facts directory ------------------------------------------- 0.51s 2025-05-13 23:10:05.395873 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s 2025-05-13 23:10:05.396461 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2025-05-13 23:10:05.396800 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s 2025-05-13 23:10:05.397269 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2025-05-13 23:10:05.397513 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-05-13 23:10:05.397896 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-05-13 23:10:05.398268 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-05-13 23:10:05.720398 | orchestrator | + osism apply bootstrap 2025-05-13 23:10:07.382769 | orchestrator | 2025-05-13 23:10:07 | INFO  | Task d07dd2f3-39d8-43c1-961d-05c09bf828d3 (bootstrap) was prepared for execution. 2025-05-13 23:10:07.382876 | orchestrator | 2025-05-13 23:10:07 | INFO  | It takes a moment until task d07dd2f3-39d8-43c1-961d-05c09bf828d3 (bootstrap) has been started and output is visible here. 2025-05-13 23:10:11.599936 | orchestrator | 2025-05-13 23:10:11.600672 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-13 23:10:11.603026 | orchestrator | 2025-05-13 23:10:11.604226 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-13 23:10:11.604730 | orchestrator | Tuesday 13 May 2025 23:10:11 +0000 (0:00:00.177) 0:00:00.177 *********** 2025-05-13 23:10:11.701788 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:11.730552 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:11.759977 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:11.786832 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:11.872619 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:11.873629 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:11.874905 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:11.875873 | orchestrator | 2025-05-13 23:10:11.877446 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-13 23:10:11.877863 | orchestrator | 2025-05-13 23:10:11.879164 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-13 23:10:11.880042 | orchestrator | Tuesday 13 May 2025 23:10:11 +0000 (0:00:00.278) 0:00:00.455 *********** 2025-05-13 23:10:15.564712 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:15.565305 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:15.568508 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:15.570078 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:15.570988 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:15.571637 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:15.572361 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:15.572811 | orchestrator | 2025-05-13 23:10:15.573692 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-13 23:10:15.574283 | orchestrator | 2025-05-13 23:10:15.574877 | orchestrator | TASK [Gathers facts about 
hosts] *********************************************** 2025-05-13 23:10:15.575496 | orchestrator | Tuesday 13 May 2025 23:10:15 +0000 (0:00:03.691) 0:00:04.147 *********** 2025-05-13 23:10:15.666269 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-13 23:10:15.666561 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-13 23:10:15.702516 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-13 23:10:15.702673 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-13 23:10:15.703688 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-13 23:10:15.726223 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 23:10:15.726405 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-13 23:10:15.767435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 23:10:15.767595 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-13 23:10:15.767909 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-13 23:10:15.768258 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-13 23:10:16.052693 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:10:16.053596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 23:10:16.053783 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-13 23:10:16.054108 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-13 23:10:16.054724 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-13 23:10:16.055644 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-13 23:10:16.056057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-13 23:10:16.056524 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-13 23:10:16.057336 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-13 23:10:16.057739 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-13 23:10:16.057863 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-13 23:10:16.058154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-13 23:10:16.058743 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-13 23:10:16.059312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-13 23:10:16.059734 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-13 23:10:16.061378 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-13 23:10:16.063842 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-13 23:10:16.063875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-13 23:10:16.063886 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-13 23:10:16.063897 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-13 23:10:16.063908 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-13 23:10:16.064395 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:10:16.064959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-13 23:10:16.065357 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-13 
23:10:16.066144 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-13 23:10:16.066814 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-13 23:10:16.066916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-13 23:10:16.067356 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-13 23:10:16.067768 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-13 23:10:16.068317 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-13 23:10:16.068716 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:10:16.070005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 23:10:16.071222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-13 23:10:16.073455 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-13 23:10:16.074144 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:10:16.075069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 23:10:16.075790 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-13 23:10:16.076597 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-13 23:10:16.077316 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:10:16.077956 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-13 23:10:16.078650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 23:10:16.079280 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:10:16.079711 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-13 23:10:16.080150 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-13 23:10:16.080593 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:10:16.081456 | orchestrator | 2025-05-13 23:10:16.081914 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-13 23:10:16.082493 | orchestrator | 2025-05-13 23:10:16.083003 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-13 23:10:16.083571 | orchestrator | Tuesday 13 May 2025 23:10:16 +0000 (0:00:00.488) 0:00:04.636 *********** 2025-05-13 23:10:17.283924 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:17.284753 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:17.286150 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:17.287680 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:17.287712 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:17.288463 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:17.289218 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:17.289970 | orchestrator | 2025-05-13 23:10:17.290940 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-13 23:10:17.291526 | orchestrator | Tuesday 13 May 2025 23:10:17 +0000 (0:00:01.230) 0:00:05.866 *********** 2025-05-13 23:10:18.525620 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:18.525795 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:18.526968 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:18.528076 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:18.528353 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:18.528845 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:18.530597 | orchestrator | ok: [testbed-node-0] 
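
The hostname handling above is the usual two-step pattern: set the running hostname, then persist it to /etc/hostname so it survives a reboot. A minimal sketch of that pattern (illustrative only, not the actual source of osism.commons.hostname; the use of inventory_hostname_short is an assumption):

    - name: Set hostname
      ansible.builtin.hostname:
        name: "{{ inventory_hostname_short }}"

    - name: Copy /etc/hostname
      ansible.builtin.copy:
        content: "{{ inventory_hostname_short }}\n"
        dest: /etc/hostname
        owner: root
        group: root
        mode: "0644"

All seven hosts report "ok" here because the testbed images already carry the correct hostname, so neither task has anything to change.
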
2025-05-13 23:10:18.531980 | orchestrator | 2025-05-13 23:10:18.533213 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-13 23:10:18.533605 | orchestrator | Tuesday 13 May 2025 23:10:18 +0000 (0:00:01.240) 0:00:07.107 *********** 2025-05-13 23:10:18.813457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:10:18.814213 | orchestrator | 2025-05-13 23:10:18.815779 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-13 23:10:18.816377 | orchestrator | Tuesday 13 May 2025 23:10:18 +0000 (0:00:00.287) 0:00:07.394 *********** 2025-05-13 23:10:20.917672 | orchestrator | changed: [testbed-manager] 2025-05-13 23:10:20.918780 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:10:20.919071 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:20.920027 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:10:20.925085 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:10:20.925108 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:10:20.928027 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:20.928067 | orchestrator | 2025-05-13 23:10:20.928102 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-13 23:10:20.928112 | orchestrator | Tuesday 13 May 2025 23:10:20 +0000 (0:00:02.103) 0:00:09.498 *********** 2025-05-13 23:10:21.001873 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:10:21.223117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:10:21.223268 | orchestrator | 2025-05-13 23:10:21.223357 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-13 23:10:21.223375 | orchestrator | Tuesday 13 May 2025 23:10:21 +0000 (0:00:00.307) 0:00:09.806 *********** 2025-05-13 23:10:22.307888 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:22.308657 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:10:22.312901 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:22.313700 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:10:22.314557 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:10:22.315311 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:10:22.315712 | orchestrator | 2025-05-13 23:10:22.316305 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-13 23:10:22.317031 | orchestrator | Tuesday 13 May 2025 23:10:22 +0000 (0:00:01.083) 0:00:10.889 *********** 2025-05-13 23:10:22.385610 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:10:22.956614 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:22.956796 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:10:22.956884 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:10:22.957117 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:10:22.958115 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:22.958537 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:10:22.958668 | orchestrator |
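
The proxy role skipped the manager and wrote proxy settings on the six nodes in two places: an apt configuration drop-in and the system-wide /etc/environment. A sketch of the equivalent tasks; the drop-in file name and the proxy_url variable are assumptions for illustration, not values taken from this job:

    - name: Configure proxy parameters for apt
      ansible.builtin.copy:
        content: |
          Acquire::http::Proxy "{{ proxy_url }}";
          Acquire::https::Proxy "{{ proxy_url }}";
        dest: /etc/apt/apt.conf.d/01-proxy   # hypothetical file name
        mode: "0644"

    - name: Set system wide settings in environment file
      ansible.builtin.blockinfile:
        path: /etc/environment
        block: |
          http_proxy={{ proxy_url }}
          https_proxy={{ proxy_url }}
          no_proxy=localhost,127.0.0.1

Here proxy_url would point at whatever HTTP proxy the testbed environment provides.
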
2025-05-13 23:10:22.959865 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-13 23:10:22.959903 | orchestrator | Tuesday 13 May 2025 23:10:22 +0000 (0:00:00.649) 0:00:11.539 *********** 2025-05-13 23:10:23.077354 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:10:23.099764 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:10:23.125976 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:10:23.395116 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:10:23.395787 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:10:23.396572 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:10:23.397583 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:23.398668 | orchestrator | 2025-05-13 23:10:23.399793 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-13 23:10:23.400807 | orchestrator | Tuesday 13 May 2025 23:10:23 +0000 (0:00:00.436) 0:00:11.975 *********** 2025-05-13 23:10:23.464912 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:10:23.492967 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:10:23.517057 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:10:23.544735 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:10:23.605249 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:10:23.605566 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:10:23.606577 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:10:23.609642 | orchestrator | 2025-05-13 23:10:23.613116 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-13 23:10:23.614000 | orchestrator | Tuesday 13 May 2025 23:10:23 +0000 (0:00:00.211) 0:00:12.187 *********** 2025-05-13 23:10:23.905100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:10:23.905368 | orchestrator | 2025-05-13 23:10:23.906306 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-13 23:10:23.907050 | orchestrator | Tuesday 13 May 2025 23:10:23 +0000 (0:00:00.300) 0:00:12.487 *********** 2025-05-13 23:10:24.246890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:10:24.249401 | orchestrator | 2025-05-13 23:10:24.252099 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-13 23:10:24.254732 | orchestrator | Tuesday 13 May 2025 23:10:24 +0000 (0:00:00.340) 0:00:12.828 *********** 2025-05-13 23:10:25.575341 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:25.575460 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:25.575894 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:25.576512 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:25.576834 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:25.577385 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:25.577820 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:25.578324 | orchestrator | 2025-05-13 23:10:25.578932 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-13 23:10:25.579551 | orchestrator | Tuesday 13
May 2025 23:10:25 +0000 (0:00:01.325) 0:00:14.153 *********** 2025-05-13 23:10:25.664023 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:10:25.693562 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:10:25.718780 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:10:25.748300 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:10:25.809139 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:10:25.809474 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:10:25.810292 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:10:25.810858 | orchestrator | 2025-05-13 23:10:25.811921 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-13 23:10:25.812652 | orchestrator | Tuesday 13 May 2025 23:10:25 +0000 (0:00:00.238) 0:00:14.392 *********** 2025-05-13 23:10:26.446675 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:26.447524 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:26.448314 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:26.449509 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:26.450011 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:26.450992 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:26.452068 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:26.453560 | orchestrator | 2025-05-13 23:10:26.454706 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-13 23:10:26.455511 | orchestrator | Tuesday 13 May 2025 23:10:26 +0000 (0:00:00.635) 0:00:15.027 *********** 2025-05-13 23:10:26.556534 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:10:26.592370 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:10:26.619977 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:10:26.712670 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:10:26.713635 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:10:26.714601 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:10:26.716472 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:10:26.717552 | orchestrator | 2025-05-13 23:10:26.719109 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-13 23:10:26.720746 | orchestrator | Tuesday 13 May 2025 23:10:26 +0000 (0:00:00.265) 0:00:15.293 *********** 2025-05-13 23:10:27.286568 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:27.286675 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:27.286865 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:27.289677 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:10:27.290517 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:10:27.291263 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:10:27.292393 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:10:27.293326 | orchestrator | 2025-05-13 23:10:27.293884 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-13 23:10:27.294637 | orchestrator | Tuesday 13 May 2025 23:10:27 +0000 (0:00:00.575) 0:00:15.868 *********** 2025-05-13 23:10:28.634077 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:28.634842 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:28.636545 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:28.636910 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:10:28.637968 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:10:28.638855 | orchestrator | 
changed: [testbed-node-2] 2025-05-13 23:10:28.639790 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:10:28.640503 | orchestrator | 2025-05-13 23:10:28.641379 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-13 23:10:28.642531 | orchestrator | Tuesday 13 May 2025 23:10:28 +0000 (0:00:01.346) 0:00:17.214 *********** 2025-05-13 23:10:29.688662 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:29.688770 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:29.692001 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:29.692966 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:29.693673 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:29.694933 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:29.695615 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:29.697766 | orchestrator | 2025-05-13 23:10:29.698683 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-13 23:10:29.699898 | orchestrator | Tuesday 13 May 2025 23:10:29 +0000 (0:00:01.055) 0:00:18.270 *********** 2025-05-13 23:10:29.982381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:10:29.982481 | orchestrator | 2025-05-13 23:10:29.982496 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-13 23:10:29.982510 | orchestrator | Tuesday 13 May 2025 23:10:29 +0000 (0:00:00.294) 0:00:18.564 *********** 2025-05-13 23:10:30.075454 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:10:31.288572 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:10:31.288829 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:10:31.290329 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:10:31.292603 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:31.292624 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:10:31.293315 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:31.294219 | orchestrator |
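
The resolvconf role above converges every host on systemd-resolved: packages that would otherwise manage /etc/resolv.conf are removed, /etc/resolv.conf is pointed at the resolved stub file, and the service is enabled and restarted where its configuration changed. The manager reports "ok" throughout because it was already set up this way. The two central steps, sketched (a plausible shape, not the role's actual source):

    - name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
      ansible.builtin.file:
        src: /run/systemd/resolve/stub-resolv.conf
        dest: /etc/resolv.conf
        state: link
        force: true

    - name: Start/enable systemd-resolved service
      ansible.builtin.service:
        name: systemd-resolved
        state: started
        enabled: true
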
2025-05-13 23:10:31.295513 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-13 23:10:31.296955 | orchestrator | Tuesday 13 May 2025 23:10:31 +0000 (0:00:00.238) 0:00:19.869 *********** 2025-05-13 23:10:31.372595 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:31.398704 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:31.430361 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:31.455404 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:31.524988 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:31.527004 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:31.527806 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:31.528364 | orchestrator | 2025-05-13 23:10:31.528984 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-13 23:10:31.529525 | orchestrator | Tuesday 13 May 2025 23:10:31 +0000 (0:00:00.221) 0:00:20.108 *********** 2025-05-13 23:10:31.631975 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:31.649023 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:31.671270 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:31.746665 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:31.747659 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:31.749030 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:31.749729 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:31.750434 | orchestrator | 2025-05-13 23:10:31.751065 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-13 23:10:31.751906 | orchestrator | Tuesday 13 May 2025 23:10:31 +0000 (0:00:00.236) 0:00:20.329 *********** 2025-05-13 23:10:31.828821 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:31.851816 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:31.881903 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:31.905317 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:31.983158 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:31.983272 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:31.983439 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:31.983747 | orchestrator | 2025-05-13 23:10:31.984314 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-13 23:10:31.984752 | orchestrator | Tuesday 13 May 2025 23:10:31 +0000 (0:00:00.236) 0:00:20.566 *********** 2025-05-13 23:10:32.302903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:10:32.304561 | orchestrator | 2025-05-13 23:10:32.306116 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-13 23:10:32.307242 | orchestrator | Tuesday 13 May 2025 23:10:32 +0000 (0:00:00.318) 0:00:20.884 *********** 2025-05-13 23:10:32.886991 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:32.888066 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:32.889816 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:32.891287 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:32.892290 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:32.893241 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:32.894346 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:32.895237 | orchestrator | 2025-05-13 23:10:32.895913 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-13 23:10:32.897493 | orchestrator | Tuesday 13 May 2025 23:10:32 +0000 (0:00:00.585) 0:00:21.469 *********** 2025-05-13 23:10:32.969053 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:10:33.002098 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:10:33.027375 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:10:33.061997 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:10:33.139316 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:10:33.141781 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:10:33.142330 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:10:33.143680 | orchestrator | 2025-05-13 23:10:33.144731 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-13 23:10:33.145640 | orchestrator | Tuesday 13 May 2025 23:10:33 +0000 (0:00:00.251) 0:00:21.721 *********** 2025-05-13 23:10:34.251606 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:34.251711 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:34.253125 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:34.255445 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:10:34.255582 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:34.256640 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:10:34.257406 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:34.258371 | orchestrator | 2025-05-13 23:10:34.259077 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-13 23:10:34.260007 | orchestrator | Tuesday 13 May 2025 23:10:34 +0000 (0:00:01.111) 0:00:22.832 *********** 2025-05-13 23:10:34.812845 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:34.814801 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:34.814863 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:34.814873 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:34.814881 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:34.814889 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:34.815870 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:34.816753 | orchestrator | 2025-05-13 23:10:34.817764 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-13 23:10:34.818296 | orchestrator | Tuesday 13 May 2025 23:10:34 +0000 (0:00:00.559) 0:00:23.391 *********** 2025-05-13 23:10:35.907038 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:35.907959 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:35.907995 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:35.908343 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:35.909091 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:35.909521 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:35.910009 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:10:35.910550 | orchestrator | 2025-05-13 23:10:35.910976 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-13 23:10:35.911493 | orchestrator | Tuesday 13 May 2025 23:10:35 +0000 (0:00:01.095) 0:00:24.487 *********** 2025-05-13 23:10:49.694636 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:49.694770 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:49.694786 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:49.694798 | orchestrator | changed: [testbed-manager] 2025-05-13 23:10:49.695050 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:49.695805 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:10:49.697642 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:49.698431 | orchestrator |
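
On Ubuntu 24.04 the repository role switches apt to the deb822 format: the legacy /etc/apt/sources.list is removed and /etc/apt/sources.list.d/ubuntu.sources is written instead, which is also why "Include tasks for Ubuntu < 24.04" was skipped on every host. Roughly as follows; the mirror URIs and suites below are the stock Ubuntu noble defaults, assumed rather than read from this job:

    - name: Remove sources.list file
      ansible.builtin.file:
        path: /etc/apt/sources.list
        state: absent

    - name: Copy ubuntu.sources file
      ansible.builtin.copy:
        dest: /etc/apt/sources.list.d/ubuntu.sources
        mode: "0644"
        content: |
          Types: deb
          URIs: http://archive.ubuntu.com/ubuntu/
          Suites: noble noble-updates noble-backports
          Components: main restricted universe multiverse
          Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

The 13.8 s "Update package cache" that follows is the first apt update against the new sources; it reports "changed" on the hosts where the cache was actually refreshed.
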
2025-05-13 23:10:49.699923 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-13 23:10:49.700863 | orchestrator | Tuesday 13 May 2025 23:10:49 +0000 (0:00:13.785) 0:00:38.273 *********** 2025-05-13 23:10:49.786188 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:49.816526 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:49.838643 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:49.868159 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:49.926331 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:49.927286 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:49.928200 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:49.929047 | orchestrator | 2025-05-13 23:10:49.929971 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-13 23:10:49.930453 | orchestrator | Tuesday 13 May 2025 23:10:49 +0000 (0:00:00.246) 0:00:38.509 *********** 2025-05-13 23:10:50.014146 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:50.049649 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:50.078699 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:50.111715 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:50.173877 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:50.174899 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:50.178108 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:50.178166 | orchestrator | 2025-05-13 23:10:50.178189 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-13 23:10:50.178209 | orchestrator | Tuesday 13 May 2025 23:10:50 +0000 (0:00:00.249) 0:00:38.756 *********** 2025-05-13 23:10:50.260594 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:50.290475 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:50.318502 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:50.345737 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:50.424652 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:50.425734 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:50.426519 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:50.427573 | orchestrator | 2025-05-13 23:10:50.428386 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-13 23:10:50.428900 | orchestrator | Tuesday 13 May 2025 23:10:50 +0000 (0:00:00.306) 0:00:39.006 *********** 2025-05-13 23:10:50.731626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:10:50.732108 | orchestrator | 2025-05-13 23:10:50.733506 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-13 23:10:50.734289 | orchestrator | Tuesday 13 May 2025 23:10:50 +0000 (0:00:00.306) 0:00:39.313 *********** 2025-05-13 23:10:52.269909 | orchestrator | ok: [testbed-manager] 2025-05-13 23:10:52.270356 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:52.273492 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:52.274121 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:52.275278 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:52.276062 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:52.277208 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:52.278472 | orchestrator | 2025-05-13 23:10:52.279277 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-13 23:10:52.280173 | orchestrator | Tuesday 13 May 2025 23:10:52 +0000 (0:00:01.538) 0:00:40.851 *********** 2025-05-13 23:10:53.369673 | orchestrator | changed: [testbed-manager] 2025-05-13 23:10:53.371045 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:53.371954 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:53.372909 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:10:53.373962 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:10:53.375374 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:10:53.376402 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:10:53.377501 | orchestrator | 2025-05-13 23:10:53.378701 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-13 23:10:53.379770 | orchestrator | Tuesday 13 May 2025 23:10:53 +0000 (0:00:01.098) 0:00:41.950 *********** 2025-05-13 23:10:54.197474 | orchestrator | ok: [testbed-manager]
2025-05-13 23:10:54.197627 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:10:54.197898 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:10:54.198086 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:10:54.199454 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:10:54.200527 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:10:54.201456 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:10:54.202070 | orchestrator | 2025-05-13 23:10:54.203359 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-13 23:10:54.203869 | orchestrator | Tuesday 13 May 2025 23:10:54 +0000 (0:00:00.828) 0:00:42.778 *********** 2025-05-13 23:10:54.514344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:10:54.514629 | orchestrator | 2025-05-13 23:10:54.516114 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-13 23:10:54.516358 | orchestrator | Tuesday 13 May 2025 23:10:54 +0000 (0:00:00.317) 0:00:43.095 *********** 2025-05-13 23:10:55.645535 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:10:55.649416 | orchestrator | changed: [testbed-manager] 2025-05-13 23:10:55.649465 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:10:55.649477 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:10:55.649488 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:10:55.652592 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:10:55.652712 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:10:55.654066 | orchestrator | 2025-05-13 23:10:55.654876 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-13 23:10:55.656024 | orchestrator | Tuesday 13 May 2025 23:10:55 +0000 (0:00:01.130) 0:00:44.226 *********** 2025-05-13 23:10:55.750306 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:10:55.772776 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:10:55.796680 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:10:55.946232 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:10:55.948970 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:10:55.949010 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:10:55.949022 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:10:55.949033 | orchestrator |
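
The rsyslog role installs and configures rsyslog, then points it at a fluentd daemon running locally on each host, so every syslog message also reaches the central logging pipeline. A sketch of what the forwarding task plausibly drops into /etc/rsyslog.d/; the file name, port and protocol here are assumptions, not values taken from this job:

    - name: Forward syslog message to local fluentd daemon
      ansible.builtin.copy:
        dest: /etc/rsyslog.d/49-fluentd.conf   # hypothetical file name
        mode: "0644"
        content: |
          # send a copy of every message to the local fluentd syslog input
          *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")

"Include additional log server tasks" is skipped on all hosts, presumably because no additional external log server is configured for the testbed.
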
2025-05-13 23:10:55.949644 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-13 23:10:55.950474 | orchestrator | Tuesday 13 May 2025 23:10:55 +0000 (0:00:00.300) 0:00:44.526 *********** 2025-05-13 23:11:07.574756 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:11:07.574878 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:11:07.576195 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:11:07.576220 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:11:07.576231 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:11:07.576577 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:11:07.577423 | orchestrator | changed: [testbed-manager] 2025-05-13 23:11:07.578834 | orchestrator | 2025-05-13 23:11:07.578878 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-13 23:11:07.578892 | orchestrator | Tuesday 13 May 2025 23:11:07 +0000 (0:00:11.629) 0:00:56.155 *********** 2025-05-13 23:11:08.653857 | orchestrator | ok: [testbed-manager] 2025-05-13 23:11:08.653959 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:11:08.654756 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:11:08.654967 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:11:08.655608 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:11:08.657056 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:11:08.657604 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:11:08.658369 | orchestrator | 2025-05-13 23:11:08.659202 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-13 23:11:08.659798 | orchestrator | Tuesday 13 May 2025 23:11:08 +0000 (0:00:01.080) 0:00:57.236 *********** 2025-05-13 23:11:09.493155 | orchestrator | ok: [testbed-manager] 2025-05-13 23:11:09.493846 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:11:09.495382 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:11:09.496183 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:11:09.496720 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:11:09.497202 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:11:09.497755 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:11:09.498661 | orchestrator | 2025-05-13 23:11:09.499144 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-13 23:11:09.500110 | orchestrator | Tuesday 13 May 2025 23:11:09 +0000 (0:00:00.839) 0:00:58.075 *********** 2025-05-13 23:11:09.557674 | orchestrator | ok: [testbed-manager] 2025-05-13 23:11:09.578232 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:11:09.611878 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:11:09.628888 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:11:09.680914 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:11:09.681356 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:11:09.684796 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:11:09.684824 | orchestrator | 2025-05-13 23:11:09.684838 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-13 23:11:09.685462 | orchestrator | Tuesday 13 May 2025 23:11:09 +0000 (0:00:00.188) 0:00:58.264 *********** 2025-05-13 23:11:09.749601 | orchestrator | ok: [testbed-manager] 2025-05-13 23:11:09.773852 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:11:09.792946 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:11:09.817289 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:11:09.870789 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:11:09.870992 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:11:09.872257 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:11:09.875355 | orchestrator | 2025-05-13 23:11:09.878210 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-13 23:11:09.878272 | orchestrator | Tuesday 13 May 2025 23:11:09 +0000 (0:00:00.190) 0:00:58.454 *********** 2025-05-13 23:11:10.200840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:11:10.201656 | orchestrator | 2025-05-13 23:11:10.202427 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-05-13 23:11:10.203090 | orchestrator | Tuesday 13 May 2025 23:11:10 +0000 (0:00:00.327) 0:00:58.781 *********** 2025-05-13 23:11:11.829914 | orchestrator | ok: [testbed-manager] 2025-05-13 23:11:11.830222 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:11:11.830735 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:11:11.831070 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:11:11.832823 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:11:11.833419 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:11:11.833554 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:11:11.834001 | orchestrator | 2025-05-13 23:11:11.834503 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-13 23:11:11.834768 | orchestrator | Tuesday 13 May 2025 23:11:11 +0000 (0:00:01.627) 0:01:00.409 *********** 2025-05-13 23:11:12.393439 | orchestrator | changed: [testbed-manager] 2025-05-13 23:11:12.393654 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:11:12.394718 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:11:12.394741 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:11:12.395416 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:11:12.395900 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:11:12.396450 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:11:12.397057 | orchestrator | 2025-05-13 23:11:12.397645 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-13 23:11:12.398227 | orchestrator | Tuesday 13 May 2025 23:11:12 +0000 (0:00:00.565) 0:01:00.974 *********** 2025-05-13 23:11:12.490548 | orchestrator | ok: [testbed-manager] 2025-05-13 23:11:12.521655 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:11:12.556176 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:11:12.586558 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:11:12.661357 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:11:12.661838 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:11:12.662502 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:11:12.663515 | orchestrator | 2025-05-13 23:11:12.664246 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-13 23:11:12.665174 | orchestrator | Tuesday 13 May 2025 23:11:12 +0000 (0:00:00.269) 0:01:01.244 *********** 2025-05-13 23:11:13.774689 | orchestrator | ok: [testbed-manager] 2025-05-13 23:11:13.774920 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:11:13.775968 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:11:13.776949 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:11:13.778447 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:11:13.778949 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:11:13.780110 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:11:13.780767 | orchestrator | 2025-05-13 23:11:13.781597 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-13 23:11:13.782552 | orchestrator | Tuesday 13 May 2025 23:11:13 +0000 (0:00:01.110) 0:01:02.354 *********** 2025-05-13 23:11:15.621570 | orchestrator | changed: [testbed-manager] 2025-05-13 23:11:15.622917 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:11:15.624023 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:11:15.626480 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:11:15.627284 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:11:15.630379 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:11:15.631803 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:11:15.632710 | orchestrator |
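
The packages role pre-downloads before it acts: "Download upgrade packages" fetches the .debs with apt's download-only mode, and the "Upgrade packages" task that follows applies them; the same download-then-install split is used again for the required packages further down. The preceding "Set needrestart mode" step typically switches needrestart into non-interactive mode so these apt runs cannot block on its restart prompts. Sketched, assuming a dist-upgrade:

    - name: Download upgrade packages
      ansible.builtin.apt:
        upgrade: dist
        download_only: true

    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist

Splitting the download from the install keeps the phase that actually modifies the system as short as possible.
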
2025-05-13 23:11:15.633758 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-13 23:11:15.635136 | orchestrator | Tuesday 13 May 2025 23:11:15 +0000 (0:00:01.847) 0:01:04.202 *********** 2025-05-13 23:11:18.038281 | orchestrator | ok: [testbed-manager] 2025-05-13 23:11:18.038916 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:11:18.040030 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:11:18.041109 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:11:18.043248 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:11:18.044010 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:11:18.044803 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:11:18.045680 | orchestrator | 2025-05-13 23:11:18.046501 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-13 23:11:18.047304 | orchestrator | Tuesday 13 May 2025 23:11:18 +0000 (0:00:02.416) 0:01:06.618 *********** 2025-05-13 23:11:55.291726 | orchestrator | ok: [testbed-manager] 2025-05-13 23:11:55.291845 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:11:55.291860 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:11:55.291872 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:11:55.292171 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:11:55.295216 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:11:55.296145 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:11:55.298243 | orchestrator | 2025-05-13 23:11:55.299529 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-13 23:11:55.300906 | orchestrator | Tuesday 13 May 2025 23:11:55 +0000 (0:00:37.252) 0:01:43.871 *********** 2025-05-13 23:13:11.050509 | orchestrator | changed: [testbed-manager] 2025-05-13 23:13:11.050652 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:13:11.050671 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:13:11.050683 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:13:11.050694 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:13:11.050705 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:13:11.050716 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:13:11.050728 | orchestrator | 2025-05-13 23:13:11.050873 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-13 23:13:11.050938 | orchestrator | Tuesday 13 May 2025 23:13:11 +0000 (0:01:15.757) 0:02:59.628 *********** 2025-05-13 23:13:12.697645 | orchestrator | ok: [testbed-manager] 2025-05-13 23:13:12.699390 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:13:12.700550 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:13:12.701733 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:13:12.704079 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:13:12.704822 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:13:12.705369 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:13:12.705839 | orchestrator | 2025-05-13 23:13:12.707236 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-13 23:13:12.707283 | orchestrator | Tuesday 13 May 2025 23:13:12 +0000 (0:00:01.650) 0:03:01.279 *********** 2025-05-13 23:13:24.592339 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:13:24.592583 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:13:24.593389 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:13:24.595280 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:13:24.596060 | orchestrator |
ok: [testbed-node-1] 2025-05-13 23:13:24.599512 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:13:24.599724 | orchestrator | changed: [testbed-manager] 2025-05-13 23:13:24.600847 | orchestrator | 2025-05-13 23:13:24.601382 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-05-13 23:13:24.602104 | orchestrator | Tuesday 13 May 2025 23:13:24 +0000 (0:00:11.891) 0:03:13.171 *********** 2025-05-13 23:13:24.984247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-13 23:13:24.984570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-13 23:13:24.987976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-13 23:13:24.988066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-13 23:13:24.988722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-05-13 23:13:24.989742 | orchestrator | 2025-05-13 23:13:24.990765 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-13 23:13:24.991343 | orchestrator | Tuesday 13 May 2025 23:13:24 +0000 (0:00:00.394) 0:03:13.565 *********** 2025-05-13 23:13:25.047911 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-13 23:13:25.067805 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:13:25.164970 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-13 23:13:25.684918 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:13:25.685790 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-13 23:13:25.688683 | orchestrator | 
skipping: [testbed-node-4] 2025-05-13 23:13:25.688712 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-13 23:13:25.688756 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:13:25.688768 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 23:13:25.689205 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 23:13:25.689614 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 23:13:25.691246 | orchestrator | 2025-05-13 23:13:25.691270 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-13 23:13:25.691297 | orchestrator | Tuesday 13 May 2025 23:13:25 +0000 (0:00:00.699) 0:03:14.264 *********** 2025-05-13 23:13:25.727790 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-13 23:13:25.727880 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-13 23:13:25.728373 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-13 23:13:25.729189 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-13 23:13:25.767227 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-13 23:13:25.767324 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-13 23:13:25.767338 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-13 23:13:25.767351 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-13 23:13:25.767511 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-13 23:13:25.768518 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-13 23:13:25.792138 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:13:25.875666 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-13 23:13:25.875763 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-13 23:13:25.876503 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-13 23:13:25.877261 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-13 23:13:25.877868 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-13 23:13:25.879070 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-13 23:13:25.879542 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-13 23:13:30.387736 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-13 23:13:30.389892 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-13 23:13:30.392507 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-13 23:13:30.393129 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-13 23:13:30.394214 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-13 23:13:30.394986 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-13 23:13:30.396988 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-13 23:13:30.398813 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:13:30.399275 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-13 23:13:30.400680 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-13 23:13:30.401978 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-13 23:13:30.402940 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-13 23:13:30.403677 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-13 23:13:30.404486 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-13 23:13:30.405371 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-13 23:13:30.405494 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:13:30.406320 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-13 23:13:30.408587 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-13 23:13:30.409372 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-13 23:13:30.410503 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-13 23:13:30.411757 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-13 23:13:30.412630 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-13 23:13:30.413558 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-13 23:13:30.414481 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-13 23:13:30.415256 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-13 23:13:30.417277 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:13:30.420009 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-13 23:13:30.420064 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-13 23:13:30.421363 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-13 23:13:30.422261 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-13 23:13:30.422853 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-13 23:13:30.423556 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-13 23:13:30.424256 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-13 23:13:30.424868 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-13 23:13:30.425465 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-13 23:13:30.426393 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-13 23:13:30.426997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-13 23:13:30.427853 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-13 23:13:30.428520 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-13 23:13:30.429173 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-13 23:13:30.429779 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-13 23:13:30.430419 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-13 23:13:30.432788 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-13 23:13:30.432892 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-13 23:13:30.432907 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-13 23:13:30.432926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-13 23:13:30.432945 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-13 23:13:30.433224 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-13 23:13:30.433687 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-13 23:13:30.434297 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-13 23:13:30.434681 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-13 23:13:30.435305 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-13 23:13:30.435864 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-13 23:13:30.436395 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-13 23:13:30.436879 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-13 23:13:30.437719 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-13 23:13:30.438007 | orchestrator |
2025-05-13 23:13:30.438567 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-05-13 23:13:30.439164 | orchestrator | Tuesday 13 May 2025 23:13:30 +0000 (0:00:04.703) 0:03:18.968 ***********
2025-05-13 23:13:30.968237 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-13 23:13:30.969467 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-13 23:13:30.970210 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-13 23:13:30.971107 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-13 23:13:30.971981 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-13 23:13:30.972477 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-13 23:13:30.973119 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-13 23:13:30.973726 | orchestrator |
2025-05-13 23:13:30.974546 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-05-13 23:13:30.975026 | orchestrator | Tuesday 13 May 2025 23:13:30 +0000 (0:00:00.581) 0:03:19.549 ***********
2025-05-13 23:13:31.033784 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2025-05-13 23:13:31.075546 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2025-05-13 23:13:31.075758 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:13:31.076747 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2025-05-13 23:13:31.100978 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:13:31.132042 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:13:31.133190 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2025-05-13 23:13:31.156656 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:13:31.574446 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-13 23:13:31.574549 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-13 23:13:31.576206 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-13 23:13:31.577072 | orchestrator |
2025-05-13 23:13:31.578115 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-05-13 23:13:31.579389 | orchestrator | Tuesday 13 May 2025 23:13:31 +0000 (0:00:00.605) 0:03:20.155 ***********
2025-05-13 23:13:31.631508 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 
2025-05-13 23:13:31.661577 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:13:31.661725 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 
2025-05-13 23:13:31.704250 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:13:31.705552 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 
2025-05-13 23:13:31.706803 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 
2025-05-13 23:13:31.727678 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:13:31.765338 | orchestrator | skipping: [testbed-node-2]
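[Editor's note] The osism.commons.sysctl tasks above apply one list of kernel parameters per host group (generic, compute, k3s_node), which is why items are skipped on hosts outside the matching group. A minimal sketch of how such a looped task is typically written, assuming a hypothetical variable sysctl_generic holding the name/value pairs; the actual internals of the osism.commons.sysctl role are not visible in this log:

    # Apply each {'name': ..., 'value': ...} pair both persistently and at runtime.
    - name: Set sysctl parameters on generic
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true   # also set the live value, not only the config file
        reload: true
      loop: "{{ sysctl_generic }}"   # e.g. [{'name': 'vm.swappiness', 'value': 1}]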
2025-05-13 23:13:32.284320 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-13 23:13:32.287732 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-13 23:13:32.287775 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-13 23:13:32.287788 | orchestrator |
2025-05-13 23:13:32.288850 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-05-13 23:13:32.289937 | orchestrator | Tuesday 13 May 2025 23:13:32 +0000 (0:00:00.710) 0:03:20.866 ***********
2025-05-13 23:13:32.335936 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:13:32.391650 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:13:32.420383 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:13:32.449324 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:13:32.575997 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:13:32.579589 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:13:32.580263 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:13:32.580997 | orchestrator |
2025-05-13 23:13:32.581852 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-05-13 23:13:32.582901 | orchestrator | Tuesday 13 May 2025 23:13:32 +0000 (0:00:00.290) 0:03:21.157 ***********
2025-05-13 23:13:38.562819 | orchestrator | ok: [testbed-manager]
2025-05-13 23:13:38.562932 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:13:38.563899 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:13:38.564935 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:13:38.566560 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:13:38.567119 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:13:38.568089 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:13:38.569069 | orchestrator |
2025-05-13 23:13:38.570239 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-05-13 23:13:38.571112 | orchestrator | Tuesday 13 May 2025 23:13:38 +0000 (0:00:05.987) 0:03:27.144 ***********
2025-05-13 23:13:38.637152 | orchestrator | skipping: [testbed-manager] => (item=nscd) 
2025-05-13 23:13:38.676509 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:13:38.676814 | orchestrator | skipping: [testbed-node-0] => (item=nscd) 
2025-05-13 23:13:38.677416 | orchestrator | skipping: [testbed-node-1] => (item=nscd) 
2025-05-13 23:13:38.710866 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:13:38.749690 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:13:38.750162 | orchestrator | skipping: [testbed-node-2] => (item=nscd) 
2025-05-13 23:13:38.751482 | orchestrator | skipping: [testbed-node-3] => (item=nscd) 
2025-05-13 23:13:38.791567 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:13:38.875942 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:13:38.876130 | orchestrator | skipping: [testbed-node-4] => (item=nscd) 
2025-05-13 23:13:38.877255 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:13:38.877868 | orchestrator | skipping: [testbed-node-5] => (item=nscd) 
2025-05-13 23:13:38.878539 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:13:38.879314 | orchestrator |
2025-05-13 23:13:38.880097 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-05-13 23:13:38.880940 | orchestrator | Tuesday 13 May 2025 23:13:38 +0000 (0:00:00.313) 0:03:27.457 ***********
2025-05-13 23:13:39.922743 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-05-13 23:13:39.922966 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-05-13 23:13:39.923847 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-05-13 23:13:39.924705 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-05-13 23:13:39.925840 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-05-13 23:13:39.926466 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-05-13 23:13:39.927041 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-05-13 23:13:39.927691 | orchestrator |
2025-05-13 23:13:39.928449 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-05-13 23:13:39.929112 | orchestrator | Tuesday 13 May 2025 23:13:39 +0000 (0:00:01.043) 0:03:28.501 ***********
2025-05-13 23:13:40.443962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:13:40.444336 | orchestrator |
2025-05-13 23:13:40.447881 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-05-13 23:13:40.448608 | orchestrator | Tuesday 13 May 2025 23:13:40 +0000 (0:00:00.524) 0:03:29.025 ***********
2025-05-13 23:13:41.654644 | orchestrator | ok: [testbed-manager]
2025-05-13 23:13:41.654750 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:13:41.654942 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:13:41.655566 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:13:41.656224 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:13:41.657339 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:13:41.657737 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:13:41.658187 | orchestrator |
2025-05-13 23:13:41.658729 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-05-13 23:13:41.659563 | orchestrator | Tuesday 13 May 2025 23:13:41 +0000 (0:00:01.208) 0:03:30.234 ***********
2025-05-13 23:13:42.264401 | orchestrator | ok: [testbed-manager]
2025-05-13 23:13:42.265624 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:13:42.266777 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:13:42.267687 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:13:42.268823 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:13:42.269797 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:13:42.270662 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:13:42.271104 | orchestrator |
2025-05-13 23:13:42.272310 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-05-13 23:13:42.272937 | orchestrator | Tuesday 13 May 2025 23:13:42 +0000 (0:00:00.608) 0:03:30.843 ***********
2025-05-13 23:13:42.937309 | orchestrator | changed: [testbed-manager]
2025-05-13 23:13:42.937584 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:13:42.937723 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:13:42.938825 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:13:42.939370 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:13:42.939753 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:13:42.940564 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:13:42.941343 | orchestrator |
2025-05-13 23:13:42.941608 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-05-13 23:13:42.941751 | orchestrator | Tuesday 13 May 2025 23:13:42 +0000 (0:00:00.675) 0:03:31.518 ***********
2025-05-13 23:13:43.507699 | orchestrator | ok: [testbed-manager]
2025-05-13 23:13:43.509591 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:13:43.509884 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:13:43.511028 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:13:43.512137 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:13:43.512829 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:13:43.513398 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:13:43.514184 | orchestrator |
2025-05-13 23:13:43.514734 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-05-13 23:13:43.515896 | orchestrator | Tuesday 13 May 2025 23:13:43 +0000 (0:00:00.570) 0:03:32.089 ***********
2025-05-13 23:13:44.461680 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747176014.428764, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.462161 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747176062.1539643, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.463600 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747176061.8346756, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.464651 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747176054.6413553, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.466120 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747176071.1635163, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.466428 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747176059.706632, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.467861 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747176055.8321533, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.468845 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747176043.2522757, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.469605 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747175990.811309, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.470575 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747175983.4200332, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.471179 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747175979.7618873, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.471925 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747175978.9752197, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.472454 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747175974.869847, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.472962 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747175971.1985924, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 23:13:44.474246 | orchestrator |
2025-05-13 23:13:44.474913 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-05-13 23:13:44.475705 | orchestrator | Tuesday 13 May 2025 23:13:44 +0000 (0:00:00.953) 0:03:33.043 ***********
2025-05-13 23:13:45.593155 | orchestrator | changed: [testbed-manager]
2025-05-13 23:13:45.594757 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:13:45.596034 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:13:45.597336 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:13:45.598067 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:13:45.599028 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:13:45.599915 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:13:45.600333 | orchestrator |
2025-05-13 23:13:45.600941 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-05-13 23:13:45.601566 | orchestrator | Tuesday 13 May 2025 23:13:45 +0000 (0:00:01.131) 0:03:34.174 ***********
2025-05-13 23:13:46.675468 | orchestrator | changed: [testbed-manager]
2025-05-13 23:13:46.675578 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:13:46.675646 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:13:46.676097 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:13:46.676663 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:13:46.677366 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:13:46.677839 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:13:46.678383 | orchestrator |
2025-05-13 23:13:46.678818 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-05-13 23:13:46.679458 | orchestrator | Tuesday 13 May 2025 23:13:46 +0000 (0:00:01.081) 0:03:35.256 ***********
2025-05-13 23:13:47.795320 | orchestrator | changed: [testbed-manager]
2025-05-13 23:13:47.795433 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:13:47.795522 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:13:47.796264 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:13:47.796738 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:13:47.797527 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:13:47.798224 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:13:47.798856 | orchestrator |
2025-05-13 23:13:47.799696 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-05-13 23:13:47.800126 | orchestrator | Tuesday 13 May 2025 23:13:47 +0000 (0:00:01.116) 0:03:36.373 ***********
2025-05-13 23:13:47.859368 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:13:47.921035 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:13:47.947308 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:13:47.979569 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:13:48.021953 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:13:48.022186 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:13:48.022340 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:13:48.023032 | orchestrator |
2025-05-13 23:13:48.023532 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-05-13 23:13:48.023758 | orchestrator | Tuesday 13 May 2025 23:13:48 +0000 (0:00:00.232) 0:03:36.605 ***********
2025-05-13 23:13:48.761105 | orchestrator | ok: [testbed-manager]
2025-05-13 23:13:48.762747 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:13:48.764351 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:13:48.765690 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:13:48.766615 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:13:48.767818 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:13:48.768435 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:13:48.769721 | orchestrator |
2025-05-13 23:13:48.770101 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-05-13 23:13:48.770484 | orchestrator | Tuesday 13 May 2025 23:13:48 +0000 (0:00:00.734) 0:03:37.340 ***********
2025-05-13 23:13:49.161951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:13:49.162195 | orchestrator |
2025-05-13 23:13:49.163117 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-05-13 23:13:49.163674 | orchestrator | Tuesday 13 May 2025 23:13:49 +0000 (0:00:00.404) 0:03:37.744 ***********
2025-05-13 23:13:56.792484 | orchestrator | ok: [testbed-manager]
2025-05-13 23:13:56.794853 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:13:56.794904 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:13:56.795115 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:13:56.796715 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:13:56.797056 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:13:56.798447 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:13:56.798474 | orchestrator |
2025-05-13 23:13:56.799144 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-05-13 23:13:56.800455 | orchestrator | Tuesday 13 May 2025 23:13:56 +0000 (0:00:07.629) 0:03:45.374 ***********
2025-05-13 23:13:58.011637 | orchestrator | ok: [testbed-manager]
2025-05-13 23:13:58.012293 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:13:58.013697 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:13:58.014934 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:13:58.016341 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:13:58.016805 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:13:58.017622 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:13:58.018701 | orchestrator |
2025-05-13 23:13:58.019384 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-05-13 23:13:58.019899 | orchestrator | Tuesday 13 May 2025 23:13:58 +0000 (0:00:01.216) 0:03:46.590 ***********
2025-05-13 23:13:59.072363 | orchestrator | ok: [testbed-manager]
2025-05-13 23:13:59.072910 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:13:59.073419 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:13:59.073893 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:13:59.077965 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:13:59.077992 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:13:59.078001 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:13:59.078009 | orchestrator |
2025-05-13 23:13:59.078055 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-05-13 23:13:59.078066 | orchestrator | Tuesday 13 May 2025 23:13:59 +0000 (0:00:01.062) 0:03:47.653 ***********
2025-05-13 23:13:59.636336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:13:59.636440 | orchestrator |
2025-05-13 23:13:59.637458 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-05-13 23:13:59.637845 | orchestrator | Tuesday 13 May 2025 23:13:59 +0000 (0:00:00.564) 0:03:48.219 ***********
2025-05-13 23:14:07.975731 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:14:07.976342 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:14:07.978459 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:14:07.979768 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:14:07.979912 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:14:07.980861 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:14:07.981425 | orchestrator | changed: [testbed-manager]
2025-05-13 23:14:07.982431 | orchestrator |
2025-05-13 23:14:07.982549 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-05-13 23:14:07.983209 | orchestrator | Tuesday 13 May 2025 23:14:07 +0000 (0:00:08.337) 0:03:56.556 ***********
2025-05-13 23:14:08.576601 | orchestrator | changed: [testbed-manager]
2025-05-13 23:14:08.577531 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:14:08.579765 | orchestrator | changed: [testbed-node-1]
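[Editor's note] The two motd tasks "Get all configuration files in /etc/pam.d" and "Remove pam_motd.so rule" above pair a file search with a loop over its results: the item dicts printed in the log (path, mode, atime, mtime, and so on) are exactly the per-file stat records that ansible.builtin.find returns. A plausible sketch of that pattern, assuming the register name pam_files, which is not visible in this log:

    - name: Get all configuration files in /etc/pam.d
      ansible.builtin.find:
        paths: /etc/pam.d
      register: pam_files

    # Strip the pam_motd.so line from every file found above.
    - name: Remove pam_motd.so rule
      ansible.builtin.lineinfile:
        path: "{{ item.path }}"
        regexp: 'pam_motd\.so'
        state: absent
      loop: "{{ pam_files.files }}"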
2025-05-13 23:14:08.579816 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:14:08.579826 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:14:08.579835 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:14:08.579844 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:14:08.580333 | orchestrator |
2025-05-13 23:14:08.580942 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-05-13 23:14:08.582309 | orchestrator | Tuesday 13 May 2025 23:14:08 +0000 (0:00:00.600) 0:03:57.157 ***********
2025-05-13 23:14:09.741387 | orchestrator | changed: [testbed-manager]
2025-05-13 23:14:09.742753 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:14:09.745948 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:14:09.745978 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:14:09.745990 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:14:09.746001 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:14:09.746686 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:14:09.747915 | orchestrator |
2025-05-13 23:14:09.748422 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-05-13 23:14:09.748935 | orchestrator | Tuesday 13 May 2025 23:14:09 +0000 (0:00:01.166) 0:03:58.323 ***********
2025-05-13 23:14:10.824919 | orchestrator | changed: [testbed-manager]
2025-05-13 23:14:10.825695 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:14:10.826791 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:14:10.827589 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:14:10.828326 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:14:10.829238 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:14:10.830281 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:14:10.830450 | orchestrator |
2025-05-13 23:14:10.830862 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-05-13 23:14:10.831624 | orchestrator | Tuesday 13 May 2025 23:14:10 +0000 (0:00:01.082) 0:03:59.406 ***********
2025-05-13 23:14:10.930292 | orchestrator | ok: [testbed-manager]
2025-05-13 23:14:10.968035 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:14:11.046671 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:14:11.082901 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:14:11.154881 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:14:11.156065 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:14:11.158110 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:14:11.158188 | orchestrator |
2025-05-13 23:14:11.158598 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-05-13 23:14:11.159320 | orchestrator | Tuesday 13 May 2025 23:14:11 +0000 (0:00:00.331) 0:03:59.737 ***********
2025-05-13 23:14:11.270434 | orchestrator | ok: [testbed-manager]
2025-05-13 23:14:11.315396 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:14:11.344229 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:14:11.378357 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:14:11.472759 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:14:11.473277 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:14:11.474280 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:14:11.475394 | orchestrator |
2025-05-13 23:14:11.475827 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-05-13 23:14:11.476445 | orchestrator | Tuesday 13 May 2025 23:14:11 +0000 (0:00:00.318) 0:04:00.055 ***********
2025-05-13 23:14:11.555720 | orchestrator | ok: [testbed-manager]
2025-05-13 23:14:11.632766 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:14:11.673194 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:14:11.718696 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:14:11.801113 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:14:11.801412 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:14:11.802418 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:14:11.802849 | orchestrator |
2025-05-13 23:14:11.802870 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-05-13 23:14:11.803438 | orchestrator | Tuesday 13 May 2025 23:14:11 +0000 (0:00:00.327) 0:04:00.383 ***********
2025-05-13 23:14:17.591147 | orchestrator | ok: [testbed-manager]
2025-05-13 23:14:17.591314 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:14:17.592361 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:14:17.593562 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:14:17.594219 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:14:17.595411 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:14:17.595891 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:14:17.596706 | orchestrator |
2025-05-13 23:14:17.597571 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-05-13 23:14:17.597959 | orchestrator | Tuesday 13 May 2025 23:14:17 +0000 (0:00:05.788) 0:04:06.171 ***********
2025-05-13 23:14:18.058341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:14:18.058710 | orchestrator |
2025-05-13 23:14:18.060230 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-05-13 23:14:18.060276 | orchestrator | Tuesday 13 May 2025 23:14:18 +0000 (0:00:00.464) 0:04:06.636 ***********
2025-05-13 23:14:18.101474 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade) 
2025-05-13 23:14:18.101683 | orchestrator | skipping: [testbed-manager] => (item=apt-daily) 
2025-05-13 23:14:18.142334 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:14:18.142491 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade) 
2025-05-13 23:14:18.185045 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily) 
2025-05-13 23:14:18.185416 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade) 
2025-05-13 23:14:18.225329 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:14:18.225932 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily) 
2025-05-13 23:14:18.227651 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade) 
2025-05-13 23:14:18.283728 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:14:18.283881 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily) 
2025-05-13 23:14:18.284487 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade) 
2025-05-13 23:14:18.285107 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily) 
2025-05-13 23:14:18.338381 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:14:18.340849 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade) 
2025-05-13 23:14:18.341503 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily) 
2025-05-13 23:14:18.431860 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:14:18.431962 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:14:18.432740 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade) 
2025-05-13 23:14:18.433306 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily) 
2025-05-13 23:14:18.434116 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:14:18.434629 | orchestrator |
2025-05-13 23:14:18.434821 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-05-13 23:14:18.435278 | orchestrator | Tuesday 13 May 2025 23:14:18 +0000 (0:00:00.378) 0:04:07.015 ***********
2025-05-13 23:14:18.859336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:14:18.859439 | orchestrator |
2025-05-13 23:14:18.859775 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-05-13 23:14:18.860464 | orchestrator | Tuesday 13 May 2025 23:14:18 +0000 (0:00:00.424) 0:04:07.439 ***********
2025-05-13 23:14:18.937338 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service) 
2025-05-13 23:14:18.975056 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:14:18.975505 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service) 
2025-05-13 23:14:18.975605 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service) 
2025-05-13 23:14:19.015626 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:14:19.060780 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:14:19.060949 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service) 
2025-05-13 23:14:19.061486 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service) 
2025-05-13 23:14:19.096735 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:14:19.096926 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service) 
2025-05-13 23:14:19.167580 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:14:19.167676 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:14:19.167740 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service) 
2025-05-13 23:14:19.168527 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:14:19.169074 | orchestrator |
2025-05-13 23:14:19.169272 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-05-13 23:14:19.170536 | orchestrator | Tuesday 13 May 2025 23:14:19 +0000 (0:00:00.311) 0:04:07.750 ***********
2025-05-13 23:14:19.711717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:14:19.711819 | orchestrator |
2025-05-13 23:14:19.715867 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-05-13 23:14:19.715915 | orchestrator | Tuesday 13 May 2025 23:14:19 +0000 (0:00:00.542) 0:04:08.292 ***********
2025-05-13 23:14:53.450844 | orchestrator | changed: [testbed-manager]
2025-05-13 23:14:53.451043 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:14:53.451085 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:14:53.451889 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:14:53.451968 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:14:53.451997 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:14:53.452008 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:14:53.452019 | orchestrator |
2025-05-13 23:14:53.452320 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-05-13 23:14:53.452822 | orchestrator | Tuesday 13 May 2025 23:14:53 +0000 (0:00:33.736) 0:04:42.029 ***********
2025-05-13 23:15:01.182275 | orchestrator | changed: [testbed-manager]
2025-05-13 23:15:01.183235 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:15:01.186910 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:15:01.189158 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:15:01.189799 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:15:01.190504 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:15:01.191459 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:15:01.191917 | orchestrator |
2025-05-13 23:15:01.192860 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-05-13 23:15:01.193244 | orchestrator | Tuesday 13 May 2025 23:15:01 +0000 (0:00:07.734) 0:04:49.763 ***********
2025-05-13 23:15:08.324928 | orchestrator | changed: [testbed-manager]
2025-05-13 23:15:08.325237 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:15:08.326670 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:15:08.328577 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:15:08.329609 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:15:08.329790 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:15:08.330082 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:15:08.330812 | orchestrator |
2025-05-13 23:15:08.331989 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-05-13 23:15:08.332023 | orchestrator | Tuesday 13 May 2025 23:15:08 +0000 (0:00:07.142) 0:04:56.906 ***********
2025-05-13 23:15:09.920881 | orchestrator | ok: [testbed-manager]
2025-05-13 23:15:09.920986 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:15:09.921059 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:15:09.921524 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:15:09.922176 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:15:09.922731 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:15:09.923080 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:15:09.924041 | orchestrator |
2025-05-13 23:15:09.924607 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-05-13 23:15:09.924899 | orchestrator | Tuesday 13 May 2025 23:15:09 +0000 (0:00:01.594) 0:04:58.501 ***********
2025-05-13 23:15:15.308675 | orchestrator | changed: [testbed-manager]
2025-05-13 23:15:15.311357 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:15:15.312769 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:15:15.313453 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:15:15.314207 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:15:15.315029 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:15:15.315794 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:15:15.316261 | orchestrator |
2025-05-13 23:15:15.317284 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
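[Editor's note] The package cleanup steps above map closely onto the standard options of ansible.builtin.apt; the task names "Remove useless packages from the cache" and "Remove dependencies that are no longer required" correspond to its autoclean and autoremove options. A minimal sketch of the likely shape (the exact package list and purge behaviour of osism.commons.cleanup are assumptions):

    - name: Remove cloudinit package
      ansible.builtin.apt:
        name: cloud-init
        state: absent
        purge: true   # assumption: configuration is purged as well

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true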
2025-05-13 23:15:15.318094 | orchestrator | Tuesday 13 May 2025 23:15:15 +0000 (0:00:05.389) 0:05:03.890 *********** 2025-05-13 23:15:15.797269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:15:15.798381 | orchestrator | 2025-05-13 23:15:15.799341 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-13 23:15:15.799765 | orchestrator | Tuesday 13 May 2025 23:15:15 +0000 (0:00:00.488) 0:05:04.379 *********** 2025-05-13 23:15:16.526680 | orchestrator | changed: [testbed-manager] 2025-05-13 23:15:16.526988 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:15:16.528595 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:15:16.529315 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:15:16.529871 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:15:16.531432 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:15:16.534142 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:15:16.534836 | orchestrator | 2025-05-13 23:15:16.536052 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-13 23:15:16.537584 | orchestrator | Tuesday 13 May 2025 23:15:16 +0000 (0:00:00.727) 0:05:05.106 *********** 2025-05-13 23:15:18.141661 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:18.141831 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:15:18.141935 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:15:18.141951 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:15:18.142562 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:15:18.143394 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:15:18.143588 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:15:18.143988 | orchestrator | 2025-05-13 23:15:18.144633 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-13 23:15:18.144877 | orchestrator | Tuesday 13 May 2025 23:15:18 +0000 (0:00:01.615) 0:05:06.722 *********** 2025-05-13 23:15:18.938912 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:15:18.939017 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:15:18.939032 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:15:18.939416 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:15:18.940686 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:15:18.941059 | orchestrator | changed: [testbed-manager] 2025-05-13 23:15:18.941781 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:15:18.942451 | orchestrator | 2025-05-13 23:15:18.942839 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-13 23:15:18.943686 | orchestrator | Tuesday 13 May 2025 23:15:18 +0000 (0:00:00.797) 0:05:07.519 *********** 2025-05-13 23:15:19.006305 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:15:19.040505 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:15:19.080031 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:15:19.112946 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:15:19.147019 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:15:19.214980 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:15:19.215543 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:15:19.216869 | orchestrator | 2025-05-13 23:15:19.218091 | orchestrator | 
TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-13 23:15:19.219516 | orchestrator | Tuesday 13 May 2025 23:15:19 +0000 (0:00:00.278) 0:05:07.797 *********** 2025-05-13 23:15:19.313155 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:15:19.347940 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:15:19.382277 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:15:19.416957 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:15:19.454244 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:15:19.655672 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:15:19.656803 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:15:19.657660 | orchestrator | 2025-05-13 23:15:19.659298 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-13 23:15:19.660377 | orchestrator | Tuesday 13 May 2025 23:15:19 +0000 (0:00:00.440) 0:05:08.238 *********** 2025-05-13 23:15:19.767134 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:19.803198 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:15:19.841951 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:15:19.882289 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:15:19.953724 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:15:19.954300 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:15:19.955038 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:15:19.955899 | orchestrator | 2025-05-13 23:15:19.956672 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-13 23:15:19.957595 | orchestrator | Tuesday 13 May 2025 23:15:19 +0000 (0:00:00.298) 0:05:08.536 *********** 2025-05-13 23:15:20.058724 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:15:20.116821 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:15:20.152563 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:15:20.188012 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:15:20.257212 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:15:20.257652 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:15:20.258376 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:15:20.261970 | orchestrator | 2025-05-13 23:15:20.262079 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-13 23:15:20.262110 | orchestrator | Tuesday 13 May 2025 23:15:20 +0000 (0:00:00.302) 0:05:08.839 *********** 2025-05-13 23:15:20.367221 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:20.407868 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:15:20.441910 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:15:20.473851 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:15:20.545680 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:15:20.546577 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:15:20.547612 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:15:20.548675 | orchestrator | 2025-05-13 23:15:20.549758 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-05-13 23:15:20.549798 | orchestrator | Tuesday 13 May 2025 23:15:20 +0000 (0:00:00.289) 0:05:09.128 *********** 2025-05-13 23:15:20.798184 | orchestrator | ok: [testbed-manager] =>  2025-05-13 23:15:20.798336 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 23:15:20.832926 | orchestrator | ok: [testbed-node-0] =>  2025-05-13 23:15:20.833157 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 
23:15:20.867532 | orchestrator | ok: [testbed-node-1] =>  2025-05-13 23:15:20.867784 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 23:15:20.901421 | orchestrator | ok: [testbed-node-2] =>  2025-05-13 23:15:20.901922 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 23:15:20.962665 | orchestrator | ok: [testbed-node-3] =>  2025-05-13 23:15:20.963626 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 23:15:20.963874 | orchestrator | ok: [testbed-node-4] =>  2025-05-13 23:15:20.965619 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 23:15:20.965736 | orchestrator | ok: [testbed-node-5] =>  2025-05-13 23:15:20.965820 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 23:15:20.966172 | orchestrator | 2025-05-13 23:15:20.966776 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-05-13 23:15:20.967050 | orchestrator | Tuesday 13 May 2025 23:15:20 +0000 (0:00:00.416) 0:05:09.544 *********** 2025-05-13 23:15:21.066184 | orchestrator | ok: [testbed-manager] =>  2025-05-13 23:15:21.066597 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 23:15:21.102793 | orchestrator | ok: [testbed-node-0] =>  2025-05-13 23:15:21.103104 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 23:15:21.148332 | orchestrator | ok: [testbed-node-1] =>  2025-05-13 23:15:21.148575 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 23:15:21.182729 | orchestrator | ok: [testbed-node-2] =>  2025-05-13 23:15:21.182927 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 23:15:21.243548 | orchestrator | ok: [testbed-node-3] =>  2025-05-13 23:15:21.244085 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 23:15:21.245273 | orchestrator | ok: [testbed-node-4] =>  2025-05-13 23:15:21.246763 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 23:15:21.248612 | orchestrator | ok: [testbed-node-5] =>  2025-05-13 23:15:21.249368 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 23:15:21.250177 | orchestrator | 2025-05-13 23:15:21.251429 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-13 23:15:21.251922 | orchestrator | Tuesday 13 May 2025 23:15:21 +0000 (0:00:00.282) 0:05:09.827 *********** 2025-05-13 23:15:21.363750 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:15:21.400834 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:15:21.436883 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:15:21.467934 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:15:21.530349 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:15:21.531634 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:15:21.535307 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:15:21.535377 | orchestrator | 2025-05-13 23:15:21.535400 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-13 23:15:21.535422 | orchestrator | Tuesday 13 May 2025 23:15:21 +0000 (0:00:00.286) 0:05:10.113 *********** 2025-05-13 23:15:21.627095 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:15:21.660426 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:15:21.694104 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:15:21.744312 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:15:21.820815 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:15:21.821287 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:15:21.823025 | orchestrator | skipping: [testbed-node-5] 2025-05-13 
23:15:21.823201 | orchestrator | 2025-05-13 23:15:21.824263 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-13 23:15:21.825133 | orchestrator | Tuesday 13 May 2025 23:15:21 +0000 (0:00:00.289) 0:05:10.402 *********** 2025-05-13 23:15:22.252544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:15:22.252806 | orchestrator | 2025-05-13 23:15:22.253507 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-13 23:15:22.254492 | orchestrator | Tuesday 13 May 2025 23:15:22 +0000 (0:00:00.430) 0:05:10.833 *********** 2025-05-13 23:15:23.036185 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:23.037055 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:15:23.038306 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:15:23.039282 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:15:23.040633 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:15:23.041381 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:15:23.042452 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:15:23.042994 | orchestrator | 2025-05-13 23:15:23.043819 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-13 23:15:23.044571 | orchestrator | Tuesday 13 May 2025 23:15:23 +0000 (0:00:00.784) 0:05:11.617 *********** 2025-05-13 23:15:25.610151 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:25.616145 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:15:25.616229 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:15:25.616243 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:15:25.616255 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:15:25.618190 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:15:25.619603 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:15:25.620034 | orchestrator | 2025-05-13 23:15:25.621349 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-13 23:15:25.622735 | orchestrator | Tuesday 13 May 2025 23:15:25 +0000 (0:00:02.573) 0:05:14.190 *********** 2025-05-13 23:15:25.674416 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-13 23:15:25.674572 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-13 23:15:25.897289 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-13 23:15:25.901181 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-13 23:15:25.901928 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-13 23:15:25.903078 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-13 23:15:25.994978 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:15:25.995075 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-13 23:15:25.995616 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-13 23:15:25.995909 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-13 23:15:26.063825 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:15:26.064547 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-13 23:15:26.064577 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-13 23:15:26.147014 
| orchestrator | skipping: [testbed-node-1] 2025-05-13 23:15:26.147187 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-13 23:15:26.147688 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-13 23:15:26.148095 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-13 23:15:26.148569 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-13 23:15:26.214313 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:15:26.214914 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-13 23:15:26.215831 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-13 23:15:26.216411 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-13 23:15:26.341809 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:15:26.342844 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:15:26.344732 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-13 23:15:26.345613 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-13 23:15:26.346469 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-13 23:15:26.347421 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:15:26.348271 | orchestrator | 2025-05-13 23:15:26.348858 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-13 23:15:26.349569 | orchestrator | Tuesday 13 May 2025 23:15:26 +0000 (0:00:00.733) 0:05:14.923 *********** 2025-05-13 23:15:32.502057 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:32.502805 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:15:32.505557 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:15:32.506888 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:15:32.508264 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:15:32.509260 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:15:32.510609 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:15:32.511245 | orchestrator | 2025-05-13 23:15:32.511865 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-13 23:15:32.512685 | orchestrator | Tuesday 13 May 2025 23:15:32 +0000 (0:00:06.159) 0:05:21.083 *********** 2025-05-13 23:15:33.474068 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:33.474863 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:15:33.478158 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:15:33.480754 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:15:33.480938 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:15:33.481765 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:15:33.482184 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:15:33.482922 | orchestrator | 2025-05-13 23:15:33.483505 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-13 23:15:33.484078 | orchestrator | Tuesday 13 May 2025 23:15:33 +0000 (0:00:00.972) 0:05:22.056 *********** 2025-05-13 23:15:40.982136 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:40.983447 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:15:40.986325 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:15:40.987170 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:15:40.987654 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:15:40.990336 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:15:40.991101 | orchestrator 
| changed: [testbed-node-5] 2025-05-13 23:15:40.991867 | orchestrator | 2025-05-13 23:15:40.992697 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-13 23:15:40.993557 | orchestrator | Tuesday 13 May 2025 23:15:40 +0000 (0:00:07.504) 0:05:29.561 *********** 2025-05-13 23:15:44.244846 | orchestrator | changed: [testbed-manager] 2025-05-13 23:15:44.245055 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:15:44.245835 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:15:44.249610 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:15:44.250647 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:15:44.251176 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:15:44.252158 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:15:44.253458 | orchestrator | 2025-05-13 23:15:44.254621 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-13 23:15:44.255708 | orchestrator | Tuesday 13 May 2025 23:15:44 +0000 (0:00:03.264) 0:05:32.826 *********** 2025-05-13 23:15:45.441017 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:45.442799 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:15:45.443119 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:15:45.444636 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:15:45.445653 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:15:45.446420 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:15:45.447119 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:15:45.447960 | orchestrator | 2025-05-13 23:15:45.448658 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-13 23:15:45.449632 | orchestrator | Tuesday 13 May 2025 23:15:45 +0000 (0:00:01.195) 0:05:34.021 *********** 2025-05-13 23:15:46.698287 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:46.699574 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:15:46.700544 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:15:46.701238 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:15:46.702553 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:15:46.703194 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:15:46.704673 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:15:46.705902 | orchestrator | 2025-05-13 23:15:46.706209 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-13 23:15:46.707633 | orchestrator | Tuesday 13 May 2025 23:15:46 +0000 (0:00:01.253) 0:05:35.275 *********** 2025-05-13 23:15:46.905119 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:15:46.974986 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:15:47.045566 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:15:47.112787 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:15:47.307200 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:15:47.308271 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:15:47.309436 | orchestrator | changed: [testbed-manager] 2025-05-13 23:15:47.310385 | orchestrator | 2025-05-13 23:15:47.311162 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-13 23:15:47.312121 | orchestrator | Tuesday 13 May 2025 23:15:47 +0000 (0:00:00.614) 0:05:35.889 *********** 2025-05-13 23:15:56.608023 | orchestrator | ok: [testbed-manager] 2025-05-13 23:15:56.609344 | orchestrator | changed: [testbed-node-2] 
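
The pin tasks above, and the unlock/lock pair wrapped around the containerd install, are what keep apt from silently upgrading the container runtime underneath a running deployment. The role's task files are not part of this log, so the following is only a minimal sketch of the two mechanisms such tasks typically use (an apt preferences file plus a dpkg hold); the package names and the docker_version variable are assumptions, not values read from osism.services.docker:

- name: Pin docker package version (sketch)
  ansible.builtin.copy:
    content: |
      Package: docker-ce
      Pin: version {{ docker_version }}
      Pin-Priority: 1000
    dest: /etc/apt/preferences.d/docker-ce  # hypothetical path
    mode: "0644"

- name: Lock containerd package (sketch)
  ansible.builtin.dpkg_selections:
    name: containerd.io  # assumed package name
    selection: hold      # dpkg/apt will refuse to upgrade or remove it

The logged "Unlock containerd package" step would plausibly be the inverse (selection: install), releasing the hold just long enough for a deliberate upgrade.
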
2025-05-13 23:15:56.609420 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:15:56.612487 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:15:56.614260 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:15:56.615594 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:15:56.616614 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:15:56.617919 | orchestrator | 2025-05-13 23:15:56.618999 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-13 23:15:56.619799 | orchestrator | Tuesday 13 May 2025 23:15:56 +0000 (0:00:09.297) 0:05:45.187 *********** 2025-05-13 23:15:57.861016 | orchestrator | changed: [testbed-manager] 2025-05-13 23:15:57.861116 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:15:57.861435 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:15:57.863168 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:15:57.863719 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:15:57.864231 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:15:57.864736 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:15:57.865357 | orchestrator | 2025-05-13 23:15:57.865965 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-13 23:15:57.868971 | orchestrator | Tuesday 13 May 2025 23:15:57 +0000 (0:00:01.251) 0:05:46.438 *********** 2025-05-13 23:16:06.491213 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:06.496677 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:16:06.498898 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:06.498972 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:06.499226 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:06.501964 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:06.502154 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:06.502824 | orchestrator | 2025-05-13 23:16:06.503047 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-13 23:16:06.503680 | orchestrator | Tuesday 13 May 2025 23:16:06 +0000 (0:00:08.630) 0:05:55.069 *********** 2025-05-13 23:16:17.293453 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:17.293799 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:17.294753 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:17.296420 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:16:17.297009 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:17.297459 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:17.298627 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:17.299980 | orchestrator | 2025-05-13 23:16:17.300643 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-13 23:16:17.301862 | orchestrator | Tuesday 13 May 2025 23:16:17 +0000 (0:00:10.803) 0:06:05.872 *********** 2025-05-13 23:16:17.649698 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-13 23:16:18.505806 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-13 23:16:18.507772 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-13 23:16:18.507856 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-13 23:16:18.512063 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-13 23:16:18.512127 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-13 23:16:18.521416 | orchestrator | ok: 
[testbed-node-4] => (item=python3-docker) 2025-05-13 23:16:18.521459 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-13 23:16:18.524203 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-13 23:16:18.524870 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-13 23:16:18.525823 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-13 23:16:18.526341 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-13 23:16:18.528301 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-13 23:16:18.529389 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-13 23:16:18.529811 | orchestrator | 2025-05-13 23:16:18.530293 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-13 23:16:18.530928 | orchestrator | Tuesday 13 May 2025 23:16:18 +0000 (0:00:01.212) 0:06:07.085 *********** 2025-05-13 23:16:18.641138 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:16:18.705100 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:16:18.769173 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:16:18.836492 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:16:18.899334 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:16:19.019459 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:16:19.020237 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:16:19.023182 | orchestrator | 2025-05-13 23:16:19.023199 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-13 23:16:19.023439 | orchestrator | Tuesday 13 May 2025 23:16:19 +0000 (0:00:00.513) 0:06:07.598 *********** 2025-05-13 23:16:30.226520 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:30.227417 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:30.227450 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:30.227462 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:30.227473 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:30.228135 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:30.228825 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> changed=false  2025-05-13 23:16:30.229217 | orchestrator |  msg: 'Failure downloading https://github.com/osism/deb-packaging/raw/refs/heads/main/python3-docker/python3-docker_7.1.0-2_all.deb, Connection failure: The read operation timed out' 2025-05-13 23:16:30.230236 | orchestrator | 2025-05-13 23:16:30.231345 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-13 23:16:30.231706 | orchestrator | Tuesday 13 May 2025 23:16:30 +0000 (0:00:11.205) 0:06:18.804 *********** 2025-05-13 23:16:30.364918 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:16:30.436784 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:16:30.500844 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:16:30.571777 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:16:30.688977 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:16:30.690413 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:16:30.691859 | orchestrator | 2025-05-13 23:16:30.692191 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-13 23:16:30.693904 | orchestrator | Tuesday 13 May 2025 23:16:30 +0000 (0:00:00.466) 0:06:19.271 *********** 2025-05-13 23:16:30.758559 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-13 23:16:30.758910 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-13 23:16:30.826841 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:16:30.827859 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-13 23:16:30.831882 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-13 23:16:30.900510 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:16:30.901759 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-13 23:16:30.903271 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-13 23:16:30.984687 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:16:30.986131 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-13 23:16:30.988023 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-13 23:16:31.059669 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:16:31.060397 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-13 23:16:31.062144 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-13 23:16:31.184855 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:16:31.185721 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-13 23:16:31.187867 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-13 23:16:31.189002 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:16:31.190950 | orchestrator | 2025-05-13 23:16:31.192499 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-13 23:16:31.193939 | orchestrator | Tuesday 13 May 2025 23:16:31 +0000 (0:00:00.495) 0:06:19.767 *********** 2025-05-13 23:16:31.316055 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:16:31.396429 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:16:31.462122 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:16:31.525107 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:16:31.635042 | orchestrator | skipping: [testbed-node-4] 2025-05-13 
23:16:31.636255 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:16:31.637172 | orchestrator | 2025-05-13 23:16:31.638980 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-13 23:16:31.639723 | orchestrator | Tuesday 13 May 2025 23:16:31 +0000 (0:00:00.450) 0:06:20.218 *********** 2025-05-13 23:16:31.765605 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:16:31.834608 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:16:31.898446 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:16:31.981957 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:16:32.101069 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:16:32.101418 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:16:32.102695 | orchestrator | 2025-05-13 23:16:32.106202 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-13 23:16:32.106300 | orchestrator | Tuesday 13 May 2025 23:16:32 +0000 (0:00:00.465) 0:06:20.683 *********** 2025-05-13 23:16:32.232799 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:16:32.301704 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:16:32.363402 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:16:32.426978 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:16:32.708006 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:16:32.708631 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:16:32.709590 | orchestrator | 2025-05-13 23:16:32.710457 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-13 23:16:32.713639 | orchestrator | Tuesday 13 May 2025 23:16:32 +0000 (0:00:00.606) 0:06:21.289 *********** 2025-05-13 23:16:34.335167 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:34.336074 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:16:34.336112 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:16:34.339365 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:16:34.341274 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:16:34.342805 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:16:34.344314 | orchestrator | 2025-05-13 23:16:34.345401 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-13 23:16:34.346492 | orchestrator | Tuesday 13 May 2025 23:16:34 +0000 (0:00:01.626) 0:06:22.916 *********** 2025-05-13 23:16:35.077038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:16:35.077736 | orchestrator | 2025-05-13 23:16:35.079215 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-13 23:16:35.079890 | orchestrator | Tuesday 13 May 2025 23:16:35 +0000 (0:00:00.743) 0:06:23.660 *********** 2025-05-13 23:16:35.866642 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:35.866752 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:35.866836 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:35.867854 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:35.868293 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:35.868852 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:35.869358 | orchestrator | 2025-05-13 23:16:35.869614 | orchestrator | TASK [osism.services.docker : Create systemd overlay 
directory] **************** 2025-05-13 23:16:35.870138 | orchestrator | Tuesday 13 May 2025 23:16:35 +0000 (0:00:00.786) 0:06:24.446 *********** 2025-05-13 23:16:36.665343 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:36.668109 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:36.668218 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:36.668233 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:36.668300 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:36.670245 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:36.671460 | orchestrator | 2025-05-13 23:16:36.672154 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-13 23:16:36.672662 | orchestrator | Tuesday 13 May 2025 23:16:36 +0000 (0:00:00.798) 0:06:25.245 *********** 2025-05-13 23:16:38.155378 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:38.155482 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:38.156596 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:38.156620 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:38.157146 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:38.157572 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:38.157867 | orchestrator | 2025-05-13 23:16:38.159454 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-13 23:16:38.160137 | orchestrator | Tuesday 13 May 2025 23:16:38 +0000 (0:00:01.490) 0:06:26.735 *********** 2025-05-13 23:16:38.296205 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:16:39.473413 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:16:39.476226 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:16:39.476330 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:16:39.477420 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:16:39.479007 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:16:39.480018 | orchestrator | 2025-05-13 23:16:39.481068 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-13 23:16:39.481752 | orchestrator | Tuesday 13 May 2025 23:16:39 +0000 (0:00:01.317) 0:06:28.053 *********** 2025-05-13 23:16:40.781688 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:40.782677 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:40.782748 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:40.783671 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:40.784865 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:40.785369 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:40.788601 | orchestrator | 2025-05-13 23:16:40.788637 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-13 23:16:40.788967 | orchestrator | Tuesday 13 May 2025 23:16:40 +0000 (0:00:01.310) 0:06:29.364 *********** 2025-05-13 23:16:42.255404 | orchestrator | changed: [testbed-manager] 2025-05-13 23:16:42.256106 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:42.258686 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:42.259334 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:42.260671 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:42.261756 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:42.263854 | orchestrator | 2025-05-13 23:16:42.264051 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-13 
23:16:42.264787 | orchestrator | Tuesday 13 May 2025 23:16:42 +0000 (0:00:01.473) 0:06:30.837 *********** 2025-05-13 23:16:43.195224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:16:43.195985 | orchestrator | 2025-05-13 23:16:43.198142 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-13 23:16:43.200903 | orchestrator | Tuesday 13 May 2025 23:16:43 +0000 (0:00:00.937) 0:06:31.774 *********** 2025-05-13 23:16:44.548070 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:44.548627 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:16:44.550268 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:16:44.551063 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:16:44.552157 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:16:44.553748 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:16:44.554922 | orchestrator | 2025-05-13 23:16:44.555776 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-13 23:16:44.556501 | orchestrator | Tuesday 13 May 2025 23:16:44 +0000 (0:00:01.352) 0:06:33.126 *********** 2025-05-13 23:16:45.729810 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:45.729977 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:16:45.730900 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:16:45.732030 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:16:45.734073 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:16:45.736043 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:16:45.736126 | orchestrator | 2025-05-13 23:16:45.736209 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-13 23:16:45.738359 | orchestrator | Tuesday 13 May 2025 23:16:45 +0000 (0:00:01.183) 0:06:34.310 *********** 2025-05-13 23:16:46.814963 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:46.815114 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:16:46.816126 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:16:46.817172 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:16:46.818248 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:16:46.818746 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:16:46.819223 | orchestrator | 2025-05-13 23:16:46.819625 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-13 23:16:46.819941 | orchestrator | Tuesday 13 May 2025 23:16:46 +0000 (0:00:01.085) 0:06:35.396 *********** 2025-05-13 23:16:47.905441 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:47.906547 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:16:47.907776 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:16:47.908692 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:16:47.909674 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:16:47.910585 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:16:47.910862 | orchestrator | 2025-05-13 23:16:47.911767 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-13 23:16:47.912340 | orchestrator | Tuesday 13 May 2025 23:16:47 +0000 (0:00:01.087) 0:06:36.484 *********** 2025-05-13 23:16:49.075085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:16:49.075192 | orchestrator | 2025-05-13 23:16:49.075210 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:16:49.075224 | orchestrator | Tuesday 13 May 2025 23:16:48 +0000 (0:00:00.908) 0:06:37.392 *********** 2025-05-13 23:16:49.076781 | orchestrator | 2025-05-13 23:16:49.079057 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:16:49.079125 | orchestrator | Tuesday 13 May 2025 23:16:48 +0000 (0:00:00.045) 0:06:37.438 *********** 2025-05-13 23:16:49.079149 | orchestrator | 2025-05-13 23:16:49.079171 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:16:49.079906 | orchestrator | Tuesday 13 May 2025 23:16:48 +0000 (0:00:00.038) 0:06:37.477 *********** 2025-05-13 23:16:49.080657 | orchestrator | 2025-05-13 23:16:49.082639 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:16:49.083354 | orchestrator | Tuesday 13 May 2025 23:16:48 +0000 (0:00:00.045) 0:06:37.522 *********** 2025-05-13 23:16:49.083596 | orchestrator | 2025-05-13 23:16:49.083909 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:16:49.084607 | orchestrator | Tuesday 13 May 2025 23:16:48 +0000 (0:00:00.052) 0:06:37.575 *********** 2025-05-13 23:16:49.085143 | orchestrator | 2025-05-13 23:16:49.085462 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:16:49.085928 | orchestrator | Tuesday 13 May 2025 23:16:49 +0000 (0:00:00.039) 0:06:37.615 *********** 2025-05-13 23:16:49.086218 | orchestrator | 2025-05-13 23:16:49.086717 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-13 23:16:49.087222 | orchestrator | Tuesday 13 May 2025 23:16:49 +0000 (0:00:00.040) 0:06:37.655 *********** 2025-05-13 23:16:50.098729 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:16:50.100735 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:16:50.101150 | orchestrator | 2025-05-13 23:16:50.102328 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-13 23:16:50.103338 | orchestrator | Tuesday 13 May 2025 23:16:50 +0000 (0:00:01.024) 0:06:38.679 *********** 2025-05-13 23:16:51.251420 | orchestrator | changed: [testbed-manager] 2025-05-13 23:16:51.252561 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:51.253697 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:51.254769 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:51.255725 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:51.256399 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:51.257394 | orchestrator | 2025-05-13 23:16:51.258250 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-13 23:16:51.259144 | orchestrator | Tuesday 13 May 2025 23:16:51 +0000 (0:00:01.152) 0:06:39.832 *********** 2025-05-13 23:16:52.256794 | orchestrator | changed: [testbed-manager] 2025-05-13 23:16:52.256890 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:52.257445 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:52.259043 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:52.260387 | orchestrator | changed: [testbed-node-4] 2025-05-13 
23:16:52.262567 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:52.263246 | orchestrator | 2025-05-13 23:16:52.264144 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-13 23:16:52.264741 | orchestrator | Tuesday 13 May 2025 23:16:52 +0000 (0:00:01.002) 0:06:40.834 *********** 2025-05-13 23:16:52.371793 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:16:54.473388 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:54.473945 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:54.475751 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:54.476088 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:54.477117 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:54.477825 | orchestrator | 2025-05-13 23:16:54.479330 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-13 23:16:54.480155 | orchestrator | Tuesday 13 May 2025 23:16:54 +0000 (0:00:02.218) 0:06:43.053 *********** 2025-05-13 23:16:54.607877 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:16:54.608038 | orchestrator | 2025-05-13 23:16:54.608133 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-13 23:16:54.608741 | orchestrator | Tuesday 13 May 2025 23:16:54 +0000 (0:00:00.134) 0:06:43.187 *********** 2025-05-13 23:16:55.796353 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:55.796876 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:16:55.797257 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:16:55.797798 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:16:55.798264 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:16:55.799067 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:16:55.799316 | orchestrator | 2025-05-13 23:16:55.799872 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-13 23:16:55.800516 | orchestrator | Tuesday 13 May 2025 23:16:55 +0000 (0:00:01.181) 0:06:44.369 *********** 2025-05-13 23:16:55.947955 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:16:56.017734 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:16:56.096138 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:16:56.164219 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:16:56.303846 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:16:56.304810 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:16:56.304855 | orchestrator | 2025-05-13 23:16:56.307282 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-13 23:16:56.308013 | orchestrator | Tuesday 13 May 2025 23:16:56 +0000 (0:00:00.509) 0:06:44.879 *********** 2025-05-13 23:16:57.116370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:16:57.116673 | orchestrator | 2025-05-13 23:16:57.117594 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-13 23:16:57.120879 | orchestrator | Tuesday 13 May 2025 23:16:57 +0000 (0:00:00.816) 0:06:45.695 *********** 2025-05-13 23:16:57.969971 | orchestrator | ok: [testbed-manager] 2025-05-13 23:16:57.970804 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:16:57.971999 | orchestrator | ok: 
[testbed-node-2] 2025-05-13 23:16:57.972739 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:16:57.973603 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:16:57.974803 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:16:57.975223 | orchestrator | 2025-05-13 23:16:57.975958 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-13 23:16:57.976393 | orchestrator | Tuesday 13 May 2025 23:16:57 +0000 (0:00:00.855) 0:06:46.551 *********** 2025-05-13 23:17:00.705754 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-13 23:17:00.707631 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-13 23:17:00.708776 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-13 23:17:00.709870 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-13 23:17:00.711192 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-13 23:17:00.712525 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-13 23:17:00.713784 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-13 23:17:00.714569 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-13 23:17:00.716242 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-13 23:17:00.717575 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-13 23:17:00.719032 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-13 23:17:00.720739 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-13 23:17:00.722380 | orchestrator | 2025-05-13 23:17:00.723083 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-13 23:17:00.724186 | orchestrator | Tuesday 13 May 2025 23:17:00 +0000 (0:00:02.732) 0:06:49.283 *********** 2025-05-13 23:17:00.830250 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:17:00.899432 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:17:00.962426 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:17:01.027460 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:17:01.124659 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:17:01.124760 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:17:01.125636 | orchestrator | 2025-05-13 23:17:01.126844 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-13 23:17:01.127522 | orchestrator | Tuesday 13 May 2025 23:17:01 +0000 (0:00:00.422) 0:06:49.706 *********** 2025-05-13 23:17:01.855449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:17:01.855563 | orchestrator | 2025-05-13 23:17:01.859113 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-13 23:17:01.859227 | orchestrator | Tuesday 13 May 2025 23:17:01 +0000 (0:00:00.727) 0:06:50.433 *********** 2025-05-13 23:17:02.841258 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:02.841780 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:02.842652 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:02.843299 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:02.844178 | orchestrator | ok: [testbed-node-4] 2025-05-13 
23:17:02.844873 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:02.846707 | orchestrator | 2025-05-13 23:17:02.847038 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-13 23:17:02.847803 | orchestrator | Tuesday 13 May 2025 23:17:02 +0000 (0:00:00.988) 0:06:51.422 *********** 2025-05-13 23:17:03.599697 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:03.601942 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:03.602109 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:03.603115 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:03.604123 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:03.605034 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:03.605735 | orchestrator | 2025-05-13 23:17:03.608764 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-13 23:17:03.609746 | orchestrator | Tuesday 13 May 2025 23:17:03 +0000 (0:00:00.759) 0:06:52.182 *********** 2025-05-13 23:17:03.732228 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:17:03.805419 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:17:03.869964 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:17:03.935053 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:17:04.055884 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:17:04.056090 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:17:04.060344 | orchestrator | 2025-05-13 23:17:04.060404 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-13 23:17:04.060452 | orchestrator | Tuesday 13 May 2025 23:17:04 +0000 (0:00:00.454) 0:06:52.636 *********** 2025-05-13 23:17:05.432518 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:05.432934 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:05.436474 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:05.437888 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:05.438201 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:05.439335 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:05.440429 | orchestrator | 2025-05-13 23:17:05.441399 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-13 23:17:05.442195 | orchestrator | Tuesday 13 May 2025 23:17:05 +0000 (0:00:01.375) 0:06:54.012 *********** 2025-05-13 23:17:05.572711 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:17:05.644012 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:17:05.709414 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:17:05.773112 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:17:05.870906 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:17:05.871630 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:17:05.874136 | orchestrator | 2025-05-13 23:17:05.877206 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-13 23:17:05.877251 | orchestrator | Tuesday 13 May 2025 23:17:05 +0000 (0:00:00.438) 0:06:54.450 *********** 2025-05-13 23:17:12.953210 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:12.954306 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:17:12.955886 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:17:12.956769 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:17:12.958737 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:17:12.960225 | orchestrator | changed: [testbed-node-4] 2025-05-13 
23:17:12.961213 | orchestrator | 2025-05-13 23:17:12.962120 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-13 23:17:12.963328 | orchestrator | Tuesday 13 May 2025 23:17:12 +0000 (0:00:07.084) 0:07:01.535 *********** 2025-05-13 23:17:14.469127 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:14.469731 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:17:14.470803 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:17:14.472182 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:17:14.472803 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:17:14.473785 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:17:14.475110 | orchestrator | 2025-05-13 23:17:14.476183 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-13 23:17:14.476735 | orchestrator | Tuesday 13 May 2025 23:17:14 +0000 (0:00:01.512) 0:07:03.048 *********** 2025-05-13 23:17:16.149613 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:16.150264 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:17:16.152628 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:17:16.153453 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:17:16.154634 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:17:16.155489 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:17:16.156097 | orchestrator | 2025-05-13 23:17:16.157259 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-13 23:17:16.158293 | orchestrator | Tuesday 13 May 2025 23:17:16 +0000 (0:00:01.681) 0:07:04.729 *********** 2025-05-13 23:17:17.703739 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:17.704020 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:17:17.705188 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:17:17.706371 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:17:17.707107 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:17:17.707798 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:17:17.708746 | orchestrator | 2025-05-13 23:17:17.709158 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-13 23:17:17.709790 | orchestrator | Tuesday 13 May 2025 23:17:17 +0000 (0:00:01.554) 0:07:06.284 *********** 2025-05-13 23:17:18.505879 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:18.506351 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:18.507009 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:18.507985 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:18.508494 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:18.509082 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:18.509719 | orchestrator | 2025-05-13 23:17:18.510424 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-13 23:17:18.511149 | orchestrator | Tuesday 13 May 2025 23:17:18 +0000 (0:00:00.800) 0:07:07.085 *********** 2025-05-13 23:17:18.645690 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:17:18.717713 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:17:18.779753 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:17:18.850684 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:17:19.386974 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:17:19.388399 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:17:19.389322 | orchestrator | 2025-05-13 
23:17:19.390484 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-13 23:17:19.391452 | orchestrator | Tuesday 13 May 2025 23:17:19 +0000 (0:00:00.881) 0:07:07.966 *********** 2025-05-13 23:17:19.533023 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:17:19.599628 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:17:19.673986 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:17:19.742844 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:17:19.852882 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:17:19.855234 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:17:19.856900 | orchestrator | 2025-05-13 23:17:19.857110 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-13 23:17:19.858184 | orchestrator | Tuesday 13 May 2025 23:17:19 +0000 (0:00:00.465) 0:07:08.432 *********** 2025-05-13 23:17:19.991158 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:20.058416 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:20.135082 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:20.218133 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:20.334308 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:20.334724 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:20.335113 | orchestrator | 2025-05-13 23:17:20.335518 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-13 23:17:20.335993 | orchestrator | Tuesday 13 May 2025 23:17:20 +0000 (0:00:00.481) 0:07:08.913 *********** 2025-05-13 23:17:20.477813 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:20.549026 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:20.618779 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:20.689733 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:20.797685 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:20.799334 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:20.800267 | orchestrator | 2025-05-13 23:17:20.801280 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-13 23:17:20.802851 | orchestrator | Tuesday 13 May 2025 23:17:20 +0000 (0:00:00.462) 0:07:09.376 *********** 2025-05-13 23:17:20.940712 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:21.010594 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:21.077444 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:21.141380 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:21.247790 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:21.248026 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:21.252209 | orchestrator | 2025-05-13 23:17:21.252256 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-13 23:17:21.252271 | orchestrator | Tuesday 13 May 2025 23:17:21 +0000 (0:00:00.450) 0:07:09.827 *********** 2025-05-13 23:17:27.061958 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:27.062216 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:27.063699 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:27.067137 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:27.067171 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:27.067182 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:27.067287 | orchestrator | 2025-05-13 23:17:27.068202 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-13 
23:17:27.068943 | orchestrator | Tuesday 13 May 2025 23:17:27 +0000 (0:00:05.813) 0:07:15.641 *********** 2025-05-13 23:17:27.208509 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:17:27.274525 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:17:27.372067 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:17:27.451341 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:17:27.576162 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:17:27.576370 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:17:27.577016 | orchestrator | 2025-05-13 23:17:27.577746 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-13 23:17:27.581227 | orchestrator | Tuesday 13 May 2025 23:17:27 +0000 (0:00:00.516) 0:07:16.158 *********** 2025-05-13 23:17:28.321363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:17:28.321655 | orchestrator | 2025-05-13 23:17:28.328152 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-13 23:17:28.328222 | orchestrator | Tuesday 13 May 2025 23:17:28 +0000 (0:00:00.742) 0:07:16.901 *********** 2025-05-13 23:17:30.046169 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:30.046482 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:30.048276 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:30.050741 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:30.052040 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:30.053214 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:30.054425 | orchestrator | 2025-05-13 23:17:30.055590 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-13 23:17:30.056226 | orchestrator | Tuesday 13 May 2025 23:17:30 +0000 (0:00:01.725) 0:07:18.626 *********** 2025-05-13 23:17:31.327397 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:31.329110 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:31.332130 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:31.332158 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:31.332170 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:31.332739 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:31.333428 | orchestrator | 2025-05-13 23:17:31.334139 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-13 23:17:31.334672 | orchestrator | Tuesday 13 May 2025 23:17:31 +0000 (0:00:01.280) 0:07:19.907 *********** 2025-05-13 23:17:32.120602 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:32.122126 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:32.122160 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:32.122216 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:32.123162 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:32.123884 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:32.127034 | orchestrator | 2025-05-13 23:17:32.127080 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-13 23:17:32.127096 | orchestrator | Tuesday 13 May 2025 23:17:32 +0000 (0:00:00.792) 0:07:20.699 *********** 2025-05-13 23:17:33.771450 | orchestrator | changed: [testbed-manager] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:17:33.775205 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:17:33.775228 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:17:33.775235 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:17:33.776035 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:17:33.777444 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:17:33.778426 | orchestrator | 2025-05-13 23:17:33.779277 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-13 23:17:33.780323 | orchestrator | Tuesday 13 May 2025 23:17:33 +0000 (0:00:01.650) 0:07:22.350 *********** 2025-05-13 23:17:34.458303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:17:34.459638 | orchestrator | 2025-05-13 23:17:34.464264 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-13 23:17:34.464449 | orchestrator | Tuesday 13 May 2025 23:17:34 +0000 (0:00:00.688) 0:07:23.038 *********** 2025-05-13 23:17:43.135751 | orchestrator | changed: [testbed-manager] 2025-05-13 23:17:43.135866 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:17:43.136679 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:17:43.138085 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:17:43.138748 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:17:43.139532 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:17:43.140159 | orchestrator | 2025-05-13 23:17:43.140887 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-13 23:17:43.142687 | orchestrator | Tuesday 13 May 2025 23:17:43 +0000 (0:00:08.672) 0:07:31.711 *********** 2025-05-13 23:17:44.615790 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:44.620096 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:44.620158 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:44.620989 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:44.621482 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:44.622209 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:44.626241 | orchestrator | 2025-05-13 23:17:44.626510 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-13 23:17:44.627054 | orchestrator | Tuesday 13 May 2025 23:17:44 +0000 (0:00:01.484) 0:07:33.196 *********** 2025-05-13 23:17:45.865968 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:45.866087 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:45.867926 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:45.868338 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:45.869756 | orchestrator | 
ok: [testbed-node-5] 2025-05-13 23:17:45.870384 | orchestrator | 2025-05-13 23:17:45.871303 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-13 23:17:45.872142 | orchestrator | Tuesday 13 May 2025 23:17:45 +0000 (0:00:01.248) 0:07:34.444 *********** 2025-05-13 23:17:47.275074 | orchestrator | changed: [testbed-manager] 2025-05-13 23:17:47.276315 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:17:47.278673 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:17:47.280292 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:17:47.281291 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:17:47.282251 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:17:47.283745 | orchestrator | 2025-05-13 23:17:47.284613 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-13 23:17:47.285941 | orchestrator | 2025-05-13 23:17:47.286380 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-13 23:17:47.287154 | orchestrator | Tuesday 13 May 2025 23:17:47 +0000 (0:00:01.411) 0:07:35.856 *********** 2025-05-13 23:17:47.409870 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:17:47.472431 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:17:47.543065 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:17:47.612238 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:17:47.727098 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:17:47.728997 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:17:47.730166 | orchestrator | 2025-05-13 23:17:47.735142 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-13 23:17:47.738962 | orchestrator | 2025-05-13 23:17:47.740108 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-13 23:17:47.741113 | orchestrator | Tuesday 13 May 2025 23:17:47 +0000 (0:00:00.451) 0:07:36.308 *********** 2025-05-13 23:17:48.998978 | orchestrator | changed: [testbed-manager] 2025-05-13 23:17:48.999090 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:17:49.000274 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:17:49.001141 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:17:49.001878 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:17:49.002510 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:17:49.003238 | orchestrator | 2025-05-13 23:17:49.004284 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-13 23:17:49.004819 | orchestrator | Tuesday 13 May 2025 23:17:48 +0000 (0:00:01.271) 0:07:37.579 *********** 2025-05-13 23:17:50.350281 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:50.350723 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:50.352181 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:50.354970 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:50.355825 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:50.356886 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:50.358255 | orchestrator | 2025-05-13 23:17:50.358937 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-13 23:17:50.359803 | orchestrator | Tuesday 13 May 2025 23:17:50 +0000 (0:00:01.349) 0:07:38.929 *********** 2025-05-13 23:17:50.485938 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:17:50.550519 | 
orchestrator | skipping: [testbed-node-1] 2025-05-13 23:17:50.615856 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:17:50.688728 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:17:51.214178 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:17:51.214540 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:17:51.215193 | orchestrator | 2025-05-13 23:17:51.216625 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-13 23:17:51.217534 | orchestrator | Tuesday 13 May 2025 23:17:51 +0000 (0:00:00.865) 0:07:39.794 *********** 2025-05-13 23:17:52.438281 | orchestrator | changed: [testbed-manager] 2025-05-13 23:17:52.439239 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:17:52.439289 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:17:52.443528 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:17:52.444125 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:17:52.444752 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:17:52.445330 | orchestrator | 2025-05-13 23:17:52.446134 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-13 23:17:52.446493 | orchestrator | 2025-05-13 23:17:52.447946 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-13 23:17:52.448691 | orchestrator | Tuesday 13 May 2025 23:17:52 +0000 (0:00:01.224) 0:07:41.019 *********** 2025-05-13 23:17:53.214408 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:17:53.214728 | orchestrator | 2025-05-13 23:17:53.216544 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-13 23:17:53.217341 | orchestrator | Tuesday 13 May 2025 23:17:53 +0000 (0:00:00.775) 0:07:41.795 *********** 2025-05-13 23:17:53.969821 | orchestrator | ok: [testbed-manager] 2025-05-13 23:17:53.970097 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:17:53.970902 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:17:53.971676 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:17:53.974790 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:17:53.975609 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:17:53.976379 | orchestrator | 2025-05-13 23:17:53.977244 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-13 23:17:53.977717 | orchestrator | Tuesday 13 May 2025 23:17:53 +0000 (0:00:00.755) 0:07:42.551 *********** 2025-05-13 23:17:55.228728 | orchestrator | changed: [testbed-manager] 2025-05-13 23:17:55.228940 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:17:55.229368 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:17:55.229917 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:17:55.236858 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:17:55.236923 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:17:55.236937 | orchestrator | 2025-05-13 23:17:55.236950 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-13 23:17:55.236965 | orchestrator | Tuesday 13 May 2025 23:17:55 +0000 (0:00:01.258) 0:07:43.809 *********** 2025-05-13 23:17:55.988637 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:17:55.990155 | orchestrator | 
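
osism.commons.state, included here, records progress as Ansible local facts; judging from the two task names that follow in the log ("Create custom facts directory" and "Write state into file"), the role plausibly boils down to something like this sketch, in which the fact file name and payload are illustrative guesses rather than values taken from the role:

- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d  # Ansible's default local-facts location
    state: directory
    mode: "0755"

- name: Write state into file
  ansible.builtin.copy:
    content: '{"status": "bootstrap"}'     # illustrative payload
    dest: /etc/ansible/facts.d/osism.fact  # hypothetical file name
    mode: "0644"

Anything dropped under /etc/ansible/facts.d surfaces as ansible_local.* on the next fact gathering, which is how later plays can detect that bootstrap has already run on a host.
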
2025-05-13 23:17:55.991923 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-13 23:17:55.992784 | orchestrator | Tuesday 13 May 2025 23:17:55 +0000 (0:00:00.753) 0:07:44.563 ***********
2025-05-13 23:17:56.819622 | orchestrator | ok: [testbed-manager]
2025-05-13 23:17:56.819821 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:17:56.823002 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:17:56.823076 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:17:56.823667 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:17:56.824746 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:17:56.825642 | orchestrator |
2025-05-13 23:17:56.826744 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-13 23:17:56.827353 | orchestrator | Tuesday 13 May 2025 23:17:56 +0000 (0:00:00.836) 0:07:45.399 ***********
2025-05-13 23:17:58.038592 | orchestrator | changed: [testbed-manager]
2025-05-13 23:17:58.039782 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:17:58.040808 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:17:58.042083 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:17:58.042820 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:17:58.043862 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:17:58.044792 | orchestrator |
2025-05-13 23:17:58.044895 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:17:58.045143 | orchestrator | 2025-05-13 23:17:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:17:58.045610 | orchestrator | 2025-05-13 23:17:58 | INFO  | Please wait and do not abort execution.
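Editor's note: the osism.commons.state steps above persist the bootstrap state as an Ansible local fact, so that a later play can regroup hosts on it. A minimal sketch of that pattern, assuming an ini-style fact file; the filename and keys below are illustrative, not the role's actual source:

- name: Create custom facts directory (sketch)
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Write state into file (sketch)
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/osism.fact  # hypothetical filename
    content: |
      [bootstrap]
      status = True
    mode: "0644"

Facts written this way become visible on the next fact-gathering pass as ansible_local.osism.bootstrap.status.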
2025-05-13 23:17:58.046587 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-13 23:17:58.047037 | orchestrator | testbed-node-0 : ok=115  changed=44  unreachable=0 failed=1  skipped=22  rescued=0 ignored=0
2025-05-13 23:17:58.047259 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-13 23:17:58.048464 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-13 23:17:58.048873 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-13 23:17:58.049515 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-13 23:17:58.049873 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-13 23:17:58.050608 | orchestrator |
2025-05-13 23:17:58.051662 | orchestrator |
2025-05-13 23:17:58.053183 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:17:58.053197 | orchestrator | Tuesday 13 May 2025 23:17:58 +0000 (0:00:01.220) 0:07:46.620 ***********
2025-05-13 23:17:58.053442 | orchestrator | ===============================================================================
2025-05-13 23:17:58.053944 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.76s
2025-05-13 23:17:58.054503 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.25s
2025-05-13 23:17:58.055054 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.74s
2025-05-13 23:17:58.055178 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.79s
2025-05-13 23:17:58.056109 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.89s
2025-05-13 23:17:58.059617 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.63s
2025-05-13 23:17:58.063162 | orchestrator | osism.services.docker : Install python3 docker package from Debian Sid -- 11.21s
2025-05-13 23:17:58.063807 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.80s
2025-05-13 23:17:58.064397 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.30s
2025-05-13 23:17:58.065157 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.67s
2025-05-13 23:17:58.065826 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.63s
2025-05-13 23:17:58.066394 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.34s
2025-05-13 23:17:58.066978 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.73s
2025-05-13 23:17:58.067535 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.63s
2025-05-13 23:17:58.068720 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.50s
2025-05-13 23:17:58.068940 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.14s
2025-05-13 23:17:58.069402 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.08s
2025-05-13 23:17:58.069582 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.16s
2025-05-13 23:17:58.069820 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.99s
2025-05-13 23:17:58.070272 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.81s
2025-05-13 23:17:58.504392 | orchestrator | 2025-05-13 23:17:58 | INFO  | Task 31c3d8f2-9d13-453f-960d-b5171692826f (bootstrap) was prepared for execution.
2025-05-13 23:17:58.504493 | orchestrator | 2025-05-13 23:17:58 | INFO  | It takes a moment until task 31c3d8f2-9d13-453f-960d-b5171692826f (bootstrap) has been started and output is visible here.
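Editor's note: the play that follows, "Group hosts based on state bootstrap", is the consumer of the local fact written above. A sketch of how such a play can partition hosts with the group_by module; the key expression and fact path are assumptions for illustration:

- name: Group hosts based on state bootstrap (sketch)
  ansible.builtin.group_by:
    key: "bootstrap_state_{{ ansible_local.osism.bootstrap.status | default('undefined') }}"
  changed_when: false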
2025-05-13 23:18:03.085196 | orchestrator |
2025-05-13 23:18:03.086180 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-05-13 23:18:03.086283 | orchestrator |
2025-05-13 23:18:03.088763 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-05-13 23:18:03.088877 | orchestrator | Tuesday 13 May 2025 23:18:03 +0000 (0:00:00.252) 0:00:00.252 ***********
2025-05-13 23:18:03.256250 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:03.343291 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:03.431859 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:03.629396 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:03.725454 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:03.878841 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:03.879604 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:03.881008 | orchestrator |
2025-05-13 23:18:03.882137 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-13 23:18:03.883495 | orchestrator |
2025-05-13 23:18:03.884468 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-13 23:18:03.885253 | orchestrator | Tuesday 13 May 2025 23:18:03 +0000 (0:00:00.795) 0:00:01.047 ***********
2025-05-13 23:18:08.342822 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:08.343881 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:08.345223 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:08.346698 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:08.347742 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:08.348878 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:08.349615 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:08.351094 | orchestrator |
2025-05-13 23:18:08.351732 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-05-13 23:18:08.352921 | orchestrator |
2025-05-13 23:18:08.354064 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-13 23:18:08.354725 | orchestrator | Tuesday 13 May 2025 23:18:08 +0000 (0:00:04.461) 0:00:05.509 ***********
2025-05-13 23:18:08.811815 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-13 23:18:08.818850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-05-13 23:18:08.818917 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-13 23:18:08.968637 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-13 23:18:08.968953 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-05-13 23:18:08.969959 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-13 23:18:08.969987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-13 23:18:08.970617 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-13 23:18:09.142390 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-13 23:18:09.142836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-13 23:18:09.144830 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-13 23:18:09.144864 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-05-13 23:18:09.144876 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-13 23:18:09.144887 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-13 23:18:09.144899 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-13 23:18:09.324245 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-13 23:18:09.324388 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-13 23:18:09.324404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-05-13 23:18:09.324416 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-13 23:18:09.324427 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-13 23:18:09.324515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-05-13 23:18:09.324922 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-13 23:18:10.104150 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-13 23:18:10.104627 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-13 23:18:10.104999 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:18:10.106551 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-13 23:18:10.107894 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-13 23:18:10.108337 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-13 23:18:10.109919 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:18:10.111275 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-13 23:18:10.112118 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:18:10.112876 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-13 23:18:10.113405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-13 23:18:10.114065 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-13 23:18:10.114975 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-05-13 23:18:10.116138 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-13 23:18:10.116647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-13 23:18:10.117054 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-13 23:18:10.117512 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-13 23:18:10.118228 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-13 23:18:10.118412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-13 23:18:10.118692 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-13 23:18:10.119102 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-13 23:18:10.119548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-13 23:18:10.119931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-13 23:18:10.120369 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-13 23:18:10.120957 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:18:10.121598 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-13 23:18:10.122187 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-13 23:18:10.122615 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:18:10.124416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-13 23:18:10.125346 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:18:10.126473 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-13 23:18:10.127287 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-13 23:18:10.128407 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-13 23:18:10.128730 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:18:10.129187 | orchestrator |
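Editor's note: the hostname tasks in the play that follows are idempotent (all hosts report "ok" rather than "changed"); functionally they amount to roughly this sketch, using standard modules that match the task names:

- name: Set hostname (sketch)
  ansible.builtin.hostname:
    name: "{{ inventory_hostname }}"

- name: Copy /etc/hostname (sketch)
  ansible.builtin.copy:
    content: "{{ inventory_hostname }}\n"
    dest: /etc/hostname
    mode: "0644"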
2025-05-13 23:18:10.129636 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-05-13 23:18:10.130113 | orchestrator |
2025-05-13 23:18:10.130738 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-05-13 23:18:10.130957 | orchestrator | Tuesday 13 May 2025 23:18:10 +0000 (0:00:01.762) 0:00:07.272 ***********
2025-05-13 23:18:11.522321 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:11.523381 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:11.524782 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:11.525680 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:11.526835 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:11.527912 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:11.531370 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:11.534415 | orchestrator |
2025-05-13 23:18:11.534774 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-05-13 23:18:11.536100 | orchestrator | Tuesday 13 May 2025 23:18:11 +0000 (0:00:01.420) 0:00:08.693 ***********
2025-05-13 23:18:13.656959 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:13.657630 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:13.659129 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:13.660618 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:13.661812 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:13.663062 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:13.663963 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:13.665734 | orchestrator |
2025-05-13 23:18:13.666459 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-05-13 23:18:13.667231 | orchestrator | Tuesday 13 May 2025 23:18:13 +0000 (0:00:02.129) 0:00:10.822 ***********
2025-05-13 23:18:15.022258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:18:15.022368 | orchestrator |
2025-05-13 23:18:15.022896 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-05-13 23:18:15.025774 | orchestrator | Tuesday 13 May 2025 23:18:15 +0000 (0:00:01.365) 0:00:12.188 ***********
2025-05-13 23:18:22.125288 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:22.125400 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:22.125556 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:22.129236 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:22.129350 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:22.129755 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:22.130249 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:22.130806 | orchestrator |
2025-05-13 23:18:22.131041 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-05-13 23:18:22.131778 | orchestrator | Tuesday 13 May 2025 23:18:22 +0000 (0:00:07.105) 0:00:19.293 ***********
2025-05-13 23:18:22.268991 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:18:23.116806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:18:23.117237 | orchestrator |
2025-05-13 23:18:23.118248 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-05-13 23:18:23.119464 | orchestrator | Tuesday 13 May 2025 23:18:23 +0000 (0:00:00.991) 0:00:20.285 ***********
2025-05-13 23:18:24.534792 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:24.535349 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:24.537013 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:24.539247 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:24.539272 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:24.540244 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:24.542939 | orchestrator |
2025-05-13 23:18:24.542965 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-05-13 23:18:24.542979 | orchestrator | Tuesday 13 May 2025 23:18:24 +0000 (0:00:01.418) 0:00:21.704 ***********
2025-05-13 23:18:24.705913 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:18:25.694948 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:25.697120 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:25.699092 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:25.699738 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:25.701193 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:25.702149 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:25.703280 | orchestrator |
2025-05-13 23:18:25.703886 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-05-13 23:18:25.704936 | orchestrator | Tuesday 13 May 2025 23:18:25 +0000 (0:00:01.155) 0:00:22.859 ***********
2025-05-13 23:18:25.988817 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:18:26.080383 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:18:26.178621 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:26.272129 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:18:27.050372 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:18:27.050810 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:18:27.052719 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:18:27.054152 | orchestrator |
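Editor's note: the proxy role above configures apt proxying only on the nodes, while the manager is skipped (and instead has any system-wide proxy settings removed). A sketch of the apt side; the proxy URL, port, and drop-in filename below are assumptions for illustration, not values from this run:

- name: Configure proxy parameters for apt (sketch)
  ansible.builtin.copy:
    dest: /etc/apt/apt.conf.d/90osism-proxy  # hypothetical filename
    content: |
      Acquire::http::Proxy "http://testbed-manager:3128/";
      Acquire::https::Proxy "http://testbed-manager:3128/";
    mode: "0644"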
2025-05-13 23:18:27.055442 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-13 23:18:27.056228 | orchestrator | Tuesday 13 May 2025 23:18:27 +0000 (0:00:01.356) 0:00:24.216 ***********
2025-05-13 23:18:27.225324 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:18:27.304692 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:18:27.389071 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:18:27.479416 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:18:27.587315 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:18:27.711901 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:18:27.718155 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:18:27.718214 | orchestrator |
2025-05-13 23:18:27.718223 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-13 23:18:27.718232 | orchestrator | Tuesday 13 May 2025 23:18:27 +0000 (0:00:00.664) 0:00:24.880 ***********
2025-05-13 23:18:28.957568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:18:28.958105 | orchestrator |
2025-05-13 23:18:28.958996 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-13 23:18:28.959751 | orchestrator | Tuesday 13 May 2025 23:18:28 +0000 (0:00:01.243) 0:00:26.124 ***********
2025-05-13 23:18:30.260664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:18:30.262127 | orchestrator |
2025-05-13 23:18:30.264887 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-13 23:18:30.265315 | orchestrator | Tuesday 13 May 2025 23:18:30 +0000 (0:00:01.300) 0:00:27.425 ***********
2025-05-13 23:18:32.031299 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:32.032480 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:32.033714 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:32.034638 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:32.035449 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:32.036998 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:32.038006 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:32.039197 | orchestrator |
2025-05-13 23:18:32.040728 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-13 23:18:32.042065 | orchestrator | Tuesday 13 May 2025 23:18:32 +0000 (0:00:01.772) 0:00:29.197 ***********
2025-05-13 23:18:32.198857 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:18:32.281977 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:18:32.364962 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:18:32.457822 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:18:32.542510 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:18:32.672910 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:18:32.674318 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:18:32.675129 | orchestrator |
2025-05-13 23:18:32.676326 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-13 23:18:32.677521 | orchestrator | Tuesday 13 May 2025 23:18:32 +0000 (0:00:00.644) 0:00:29.842 ***********
2025-05-13 23:18:33.191852 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:33.279729 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:33.926138 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:33.928076 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:33.929097 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:33.930866 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:33.932859 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:33.936381 | orchestrator |
2025-05-13 23:18:33.939807 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-13 23:18:33.940897 | orchestrator | Tuesday 13 May 2025 23:18:33 +0000 (0:00:01.250) 0:00:31.093 ***********
2025-05-13 23:18:34.199000 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:18:34.287018 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:18:34.380978 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:18:34.477635 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:18:34.630087 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:18:34.630819 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:18:34.631135 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:18:34.631906 | orchestrator |
2025-05-13 23:18:34.632562 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-13 23:18:34.638778 | orchestrator | Tuesday 13 May 2025 23:18:34 +0000 (0:00:00.707) 0:00:31.800 ***********
2025-05-13 23:18:35.131128 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:35.447536 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:35.922618 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:35.923564 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:35.924304 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:35.925616 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:35.927145 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:35.928056 | orchestrator |
2025-05-13 23:18:35.928686 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-13 23:18:35.929567 | orchestrator | Tuesday 13 May 2025 23:18:35 +0000 (0:00:01.291) 0:00:33.092 ***********
2025-05-13 23:18:37.544273 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:37.545574 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:37.547715 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:37.548343 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:37.549758 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:37.550801 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:37.551531 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:37.552514 | orchestrator |
2025-05-13 23:18:37.553774 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-13 23:18:37.554683 | orchestrator | Tuesday 13 May 2025 23:18:37 +0000 (0:00:01.616) 0:00:34.708 ***********
2025-05-13 23:18:38.813192 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:38.814284 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:38.818175 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:38.821301 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:38.821335 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:38.821347 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:38.821710 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:38.822787 | orchestrator |
2025-05-13 23:18:38.823217 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-13 23:18:38.824389 | orchestrator | Tuesday 13 May 2025 23:18:38 +0000 (0:00:01.269) 0:00:35.978 ***********
2025-05-13 23:18:40.190201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:18:40.192079 | orchestrator |
2025-05-13 23:18:40.192412 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-13 23:18:40.196690 | orchestrator | Tuesday 13 May 2025 23:18:40 +0000 (0:00:01.377) 0:00:37.355 ***********
2025-05-13 23:18:40.366666 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:18:40.455705 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:18:40.552961 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:18:40.660855 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:18:40.945759 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:18:41.553670 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:18:41.553886 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:18:41.555677 | orchestrator |
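Editor's note: the resolvconf role above points /etc/resolv.conf at systemd-resolved's stub resolver and ensures the service is running. The two central tasks correspond to roughly this sketch (modules and paths as named in the tasks themselves):

- name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf (sketch)
  ansible.builtin.file:
    src: /run/systemd/resolve/stub-resolv.conf
    dest: /etc/resolv.conf
    state: link
    force: true

- name: Start/enable systemd-resolved service (sketch)
  ansible.builtin.service:
    name: systemd-resolved
    state: started
    enabled: true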
2025-05-13 23:18:41.556008 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-13 23:18:41.556846 | orchestrator | Tuesday 13 May 2025 23:18:41 +0000 (0:00:01.365) 0:00:38.721 ***********
2025-05-13 23:18:41.724083 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:41.814917 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:41.917258 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:42.207123 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:42.307737 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:42.445331 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:42.447261 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:42.448920 | orchestrator |
2025-05-13 23:18:42.449882 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-13 23:18:42.451178 | orchestrator | Tuesday 13 May 2025 23:18:42 +0000 (0:00:00.891) 0:00:39.613 ***********
2025-05-13 23:18:42.621005 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:42.703985 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:42.787717 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:42.882702 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:42.971878 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:43.129448 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:43.129549 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:43.129565 | orchestrator |
2025-05-13 23:18:43.129707 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-13 23:18:43.129726 | orchestrator | Tuesday 13 May 2025 23:18:43 +0000 (0:00:00.685) 0:00:40.299 ***********
2025-05-13 23:18:43.325747 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:43.417792 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:43.705855 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:43.796711 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:43.884168 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:44.025866 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:44.026754 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:44.027814 | orchestrator |
2025-05-13 23:18:44.032108 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-13 23:18:44.032346 | orchestrator | Tuesday 13 May 2025 23:18:44 +0000 (0:00:00.892) 0:00:41.192 ***********
2025-05-13 23:18:45.269212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:18:45.269974 | orchestrator |
2025-05-13 23:18:45.271752 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-13 23:18:45.272469 | orchestrator | Tuesday 13 May 2025 23:18:45 +0000 (0:00:01.243) 0:00:42.435 ***********
2025-05-13 23:18:45.721487 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:45.803012 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:46.221479 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:46.221734 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:46.222341 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:46.223472 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:46.224132 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:46.224325 | orchestrator |
2025-05-13 23:18:46.224679 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-13 23:18:46.225246 | orchestrator | Tuesday 13 May 2025 23:18:46 +0000 (0:00:00.956) 0:00:43.392 ***********
2025-05-13 23:18:46.607468 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:18:46.698859 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:18:46.791701 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:18:46.875735 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:18:46.972290 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:18:47.119170 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:18:47.120146 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:18:47.121695 | orchestrator |
2025-05-13 23:18:47.123705 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-13 23:18:47.123859 | orchestrator | Tuesday 13 May 2025 23:18:47 +0000 (0:00:00.893) 0:00:44.285 ***********
2025-05-13 23:18:48.613738 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:48.613926 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:48.614071 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:48.616072 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:48.617491 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:48.619535 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:48.621798 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:48.622649 | orchestrator |
2025-05-13 23:18:48.623320 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-13 23:18:48.624341 | orchestrator | Tuesday 13 May 2025 23:18:48 +0000 (0:00:01.493) 0:00:45.779 ***********
2025-05-13 23:18:49.130994 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:49.414575 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:49.898235 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:49.899270 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:49.900905 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:49.902324 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:49.903400 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:49.904688 | orchestrator |
2025-05-13 23:18:49.905777 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-13 23:18:49.907025 | orchestrator | Tuesday 13 May 2025 23:18:49 +0000 (0:00:01.286) 0:00:47.065 ***********
2025-05-13 23:18:51.575817 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:51.578474 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:51.579917 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:51.581200 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:51.583511 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:51.583998 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:51.585324 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:51.586328 | orchestrator |
2025-05-13 23:18:51.587484 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-13 23:18:51.588611 | orchestrator | Tuesday 13 May 2025 23:18:51 +0000 (0:00:01.669) 0:00:48.735 ***********
2025-05-13 23:18:55.563509 | orchestrator | changed: [testbed-manager]
2025-05-13 23:18:55.564249 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:18:55.566216 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:18:55.567448 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:18:55.568902 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:18:55.569409 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:18:55.570470 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:18:55.571541 | orchestrator |
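Editor's note: on Ubuntu 24.04 the repository role above removes the legacy sources.list and manages a deb822-style ubuntu.sources file instead, then refreshes the package cache (the "changed" cache update above). A sketch of that pair of steps; the mirror URI and suite list below are assumptions for a stock noble system:

- name: Copy ubuntu.sources file (sketch)
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/ubuntu.sources
    content: |
      Types: deb
      URIs: http://archive.ubuntu.com/ubuntu
      Suites: noble noble-updates noble-backports
      Components: main restricted universe multiverse
      Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
    mode: "0644"

- name: Update package cache (sketch)
  ansible.builtin.apt:
    update_cache: true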
2025-05-13 23:18:55.571572 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-05-13 23:18:55.572193 | orchestrator | Tuesday 13 May 2025 23:18:55 +0000 (0:00:03.992) 0:00:52.727 ***********
2025-05-13 23:18:55.738698 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:55.828503 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:55.911166 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:55.996110 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:56.084194 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:56.215968 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:56.223968 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:56.224060 | orchestrator |
2025-05-13 23:18:56.224086 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-05-13 23:18:56.224108 | orchestrator | Tuesday 13 May 2025 23:18:56 +0000 (0:00:00.653) 0:00:53.381 ***********
2025-05-13 23:18:56.395148 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:56.478177 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:56.557148 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:56.858237 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:56.944181 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:57.085511 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:57.088686 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:57.092020 | orchestrator |
2025-05-13 23:18:57.092060 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-05-13 23:18:57.092075 | orchestrator | Tuesday 13 May 2025 23:18:57 +0000 (0:00:00.871) 0:00:54.252 ***********
2025-05-13 23:18:57.270449 | orchestrator | ok: [testbed-manager]
2025-05-13 23:18:57.367972 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:18:57.454094 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:18:57.537368 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:18:57.633132 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:18:57.766297 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:18:57.766950 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:18:57.767933 | orchestrator |
2025-05-13 23:18:57.768913 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-05-13 23:18:57.772524 | orchestrator | Tuesday 13 May 2025 23:18:57 +0000 (0:00:00.683) 0:00:54.936 ***********
2025-05-13 23:18:59.034158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:18:59.034451 | orchestrator |
2025-05-13 23:18:59.034963 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-05-13 23:18:59.035207 | orchestrator | Tuesday 13 May 2025 23:18:59 +0000 (0:00:01.262) 0:00:56.199 ***********
2025-05-13 23:19:01.405451 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:01.405952 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:01.406898 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:01.409032 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:01.409923 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:01.411002 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:01.412120 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:01.413084 | orchestrator |
2025-05-13 23:19:01.414225 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-05-13 23:19:01.415114 | orchestrator | Tuesday 13 May 2025 23:19:01 +0000 (0:00:02.372) 0:00:58.571 ***********
2025-05-13 23:19:02.846998 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:02.851443 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:02.851534 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:02.851556 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:02.858447 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:02.858504 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:02.860868 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:02.861054 | orchestrator |
2025-05-13 23:19:02.861897 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-05-13 23:19:02.863910 | orchestrator | Tuesday 13 May 2025 23:19:02 +0000 (0:00:01.444) 0:01:00.016 ***********
2025-05-13 23:19:04.266088 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:04.266649 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:04.267389 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:04.268748 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:04.269886 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:04.271316 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:04.272050 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:04.273003 | orchestrator |
2025-05-13 23:19:04.273713 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-05-13 23:19:04.274278 | orchestrator | Tuesday 13 May 2025 23:19:04 +0000 (0:00:01.416) 0:01:01.432 ***********
2025-05-13 23:19:05.457664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:19:05.457815 | orchestrator |
2025-05-13 23:19:05.459262 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-05-13 23:19:05.460767 | orchestrator | Tuesday 13 May 2025 23:19:05 +0000 (0:00:01.191) 0:01:02.624 ***********
2025-05-13 23:19:06.893017 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:06.893560 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:06.895169 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:06.897004 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:06.897928 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:06.899208 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:06.899277 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:06.900109 | orchestrator |
2025-05-13 23:19:06.900752 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-05-13 23:19:06.901677 | orchestrator | Tuesday 13 May 2025 23:19:06 +0000 (0:00:01.438) 0:01:04.062 ***********
2025-05-13 23:19:07.268520 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:19:07.353055 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:19:07.437914 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:19:07.526276 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:19:07.613061 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:19:08.354264 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:19:08.355200 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:19:08.360029 | orchestrator |
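Editor's note: "Forward syslog message to local fluentd daemon" above amounts to an rsyslog omfwd rule pointed at a fluentd input on the same host; a sketch, where the drop-in filename and port are assumptions for illustration:

- name: Forward syslog message to local fluentd daemon (sketch)
  ansible.builtin.copy:
    dest: /etc/rsyslog.d/10-fluentd.conf  # hypothetical filename
    content: |
      *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
    mode: "0644"
  notify: Restart rsyslog service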
2025-05-13 23:19:08.360070 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-05-13 23:19:08.360085 | orchestrator | Tuesday 13 May 2025 23:19:08 +0000 (0:00:01.457) 0:01:05.520 ***********
2025-05-13 23:19:12.021351 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:12.021843 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:12.026786 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:12.026831 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:12.026843 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:12.027743 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:12.028049 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:12.029148 | orchestrator |
2025-05-13 23:19:12.029792 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-05-13 23:19:12.030634 | orchestrator | Tuesday 13 May 2025 23:19:12 +0000 (0:00:03.669) 0:01:09.189 ***********
2025-05-13 23:19:14.109033 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:14.110108 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:14.110725 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:14.111814 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:14.113038 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:14.113753 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:14.115840 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:14.115923 | orchestrator |
2025-05-13 23:19:14.115987 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-05-13 23:19:14.116864 | orchestrator | Tuesday 13 May 2025 23:19:14 +0000 (0:00:02.084) 0:01:11.274 ***********
2025-05-13 23:19:14.896826 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:16.070678 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:16.070834 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:16.070958 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:16.071265 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:16.071987 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:16.072787 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:16.073087 | orchestrator |
2025-05-13 23:19:16.074117 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-05-13 23:19:16.075187 | orchestrator | Tuesday 13 May 2025 23:19:16 +0000 (0:00:01.960) 0:01:13.235 ***********
2025-05-13 23:19:16.252649 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:16.336087 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:16.590383 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:16.677005 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:16.760832 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:16.880589 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:16.880824 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:16.882843 | orchestrator |
2025-05-13 23:19:16.883320 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-05-13 23:19:16.884176 | orchestrator | Tuesday 13 May 2025 23:19:16 +0000 (0:00:00.811) 0:01:14.047 ***********
2025-05-13 23:19:17.072795 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:17.152783 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:17.237331 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:17.317765 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:17.408038 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:17.532399 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:17.533792 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:17.534902 | orchestrator |
2025-05-13 23:19:17.535147 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-05-13 23:19:17.536202 | orchestrator | Tuesday 13 May 2025 23:19:17 +0000 (0:00:00.654) 0:01:14.701 ***********
2025-05-13 23:19:18.806380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:19:18.807281 | orchestrator |
2025-05-13 23:19:18.808422 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-05-13 23:19:18.809154 | orchestrator | Tuesday 13 May 2025 23:19:18 +0000 (0:00:01.268) 0:01:15.970 ***********
2025-05-13 23:19:21.002290 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:21.005158 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:21.005229 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:21.006942 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:21.009060 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:21.009226 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:21.010275 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:21.012264 | orchestrator |
2025-05-13 23:19:21.012410 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-05-13 23:19:21.013182 | orchestrator | Tuesday 13 May 2025 23:19:20 +0000 (0:00:02.196) 0:01:18.166 ***********
2025-05-13 23:19:21.975672 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:21.975947 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:21.978551 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:21.980234 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:21.980663 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:21.981785 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:21.983088 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:21.983863 | orchestrator |
2025-05-13 23:19:21.984829 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-05-13 23:19:21.985326 | orchestrator | Tuesday 13 May 2025 23:19:21 +0000 (0:00:00.978) 0:01:19.145 ***********
2025-05-13 23:19:22.350552 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:22.443256 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:22.538802 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:22.629148 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:22.723908 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:22.871394 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:22.872480 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:22.874999 | orchestrator |
2025-05-13 23:19:22.875804 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-05-13 23:19:22.877390 | orchestrator | Tuesday 13 May 2025 23:19:22 +0000 (0:00:00.892) 0:01:20.037 ***********
2025-05-13 23:19:24.438972 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:24.440199 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:24.441359 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:24.444245 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:24.447236 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:24.448681 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:24.449721 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:24.450667 | orchestrator |
2025-05-13 23:19:24.451700 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-05-13 23:19:24.452727 | orchestrator | Tuesday 13 May 2025 23:19:24 +0000 (0:00:01.566) 0:01:21.604 ***********
2025-05-13 23:19:26.725068 | orchestrator | changed: [testbed-manager]
2025-05-13 23:19:26.726769 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:19:26.726803 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:19:26.726815 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:19:26.726827 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:19:26.726838 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:19:26.726928 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:19:26.727568 | orchestrator |
2025-05-13 23:19:26.729946 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-05-13 23:19:26.730528 | orchestrator | Tuesday 13 May 2025 23:19:26 +0000 (0:00:02.285) 0:01:23.890 ***********
2025-05-13 23:19:29.517162 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:29.517342 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:29.518273 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:29.522263 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:29.522314 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:29.522327 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:29.522339 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:29.522377 | orchestrator |
2025-05-13 23:19:29.524813 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-05-13 23:19:29.525287 | orchestrator | Tuesday 13 May 2025 23:19:29 +0000 (0:00:02.793) 0:01:26.684 ***********
2025-05-13 23:19:31.298249 | orchestrator | changed: [testbed-manager]
2025-05-13 23:19:31.298343 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:19:31.298405 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:19:31.298873 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:19:31.299226 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:19:31.299689 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:19:31.302933 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:19:31.302990 | orchestrator |
2025-05-13 23:19:31.303000 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-05-13 23:19:31.303009 | orchestrator | Tuesday 13 May 2025 23:19:31 +0000 (0:00:01.782) 0:01:28.466 ***********
2025-05-13 23:19:33.459106 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:33.461152 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:33.461196 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:33.463268 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:33.465115 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:33.466166 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:33.467212 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:33.468276 | orchestrator |
2025-05-13 23:19:33.469295 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-05-13 23:19:33.470103 | orchestrator | Tuesday 13 May 2025 23:19:33 +0000 (0:00:02.157) 0:01:30.624 ***********
2025-05-13 23:19:35.521629 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:35.521862 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:35.522857 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:35.525687 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:35.530173 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:35.531121 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:35.532367 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:35.532908 | orchestrator |
2025-05-13 23:19:35.533804 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-05-13 23:19:35.534674 | orchestrator | Tuesday 13 May 2025 23:19:35 +0000 (0:00:02.065) 0:01:32.689 ***********
2025-05-13 23:19:38.501961 | orchestrator | ok: [testbed-manager]
2025-05-13 23:19:38.503240 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:19:38.506325 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:19:38.507108 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:19:38.510774 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:19:38.511119 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:19:38.513662 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:19:38.514555 | orchestrator |
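Editor's note: the sysctl role that follows is included once per parameter group (elasticsearch, rabbitmq, generic, compute, k3s_node) and applies each group's entries only on matching hosts, which is why most hosts report "skipping" per item. A sketch of one per-group task, assuming the ansible.posix collection; the variable name and sysctl.d filename are illustrative:

- name: Set sysctl parameters on rabbitmq (sketch)
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_file: /etc/sysctl.d/90-rabbitmq.conf  # hypothetical filename
    state: present
    reload: true
  loop: "{{ sysctl_rabbitmq | default([]) }}"  # e.g. the item list visible in the log below
  when: "'rabbitmq' in group_names"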
2025-05-13 23:19:38.515838 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-05-13 23:19:38.517119 | orchestrator | Tuesday 13 May 2025 23:19:38 +0000 (0:00:02.976) 0:01:35.666 ***********
2025-05-13 23:19:39.831102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-13 23:19:39.832550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-13 23:19:39.833754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-13 23:19:39.834919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-13 23:19:39.836308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-05-13 23:19:39.836905 | orchestrator |
2025-05-13 23:19:39.837759 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-05-13 23:19:39.838529 | orchestrator | Tuesday 13 May 2025 23:19:39 +0000 (0:00:01.330) 0:01:36.997 ***********
2025-05-13 23:19:39.912808 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-13 23:19:40.008085 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:19:40.508052 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-13 23:19:40.509188 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-13 23:19:40.511851 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-13 23:19:40.599533 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-13 23:19:40.770583 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:19:40.771117 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:19:40.772266 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-13 23:19:40.773821 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:19:40.774442 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-13 23:19:40.775561 | orchestrator |
2025-05-13 23:19:40.776294 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-05-13 23:19:40.777187 | orchestrator | Tuesday 13 May 2025 23:19:40 +0000 (0:00:00.940) 0:01:37.937 ***********
2025-05-13 23:19:40.867261 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-13 23:19:40.868158 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-13 23:19:40.870677 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-13 23:19:40.871394 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-13 23:19:40.872382 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-13 23:19:40.873078 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-13 23:19:40.874200 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-13 23:19:40.874729 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-13 23:19:40.878531 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-13 23:19:40.878678 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-13 23:19:40.957659 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:19:41.238410 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-13 23:19:41.238748 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-13 23:19:41.239880 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-13 23:19:41.240301 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-13 23:19:41.242313 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-13 23:19:41.242339 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-13 23:19:41.243059 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-13 23:19:41.243921 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-13 23:19:41.244371 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-13 23:19:41.244976 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-13 23:19:41.345283 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:19:41.345841 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-13 23:19:41.347116 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-13 23:19:41.348632 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-13 23:19:41.349584 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-13 23:19:41.350429 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-13 23:19:41.351085 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-13 23:19:41.355399 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-13 23:19:41.356357 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-13 23:19:41.357641 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-13 23:19:41.358184 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-13 23:19:41.359087 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-13 23:19:45.720171 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:19:45.720538 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-13 23:19:45.720931 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-13 23:19:45.722824 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-13 23:19:45.724140 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-13 23:19:45.724712 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-13 23:19:45.733394 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-13 23:19:45.733425 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-13 23:19:45.733437 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-13 23:19:45.733917 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-13 23:19:45.733941 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-13 23:19:45.734244 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-13 23:19:45.734995 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:19:45.735589 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-13 23:19:45.736077 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-13 23:19:45.736885 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-13 23:19:45.737576 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-13 23:19:45.738065 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-13 23:19:45.739437 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-13 23:19:45.740002 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-13 23:19:45.740346 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-13 23:19:45.740847 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-13 23:19:45.741338 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-13 23:19:45.741888 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-13 23:19:45.742477 |
orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-13 23:19:45.743154 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-13 23:19:45.743331 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-13 23:19:45.743954 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-13 23:19:45.745000 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-13 23:19:45.745637 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-13 23:19:45.746146 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-13 23:19:45.746871 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-13 23:19:45.747768 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-13 23:19:45.747789 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-13 23:19:45.748199 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-13 23:19:45.748681 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-13 23:19:45.749084 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-13 23:19:45.749530 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-13 23:19:45.749989 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-13 23:19:45.750390 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-13 23:19:45.750833 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-13 23:19:45.751223 | orchestrator | 2025-05-13 23:19:45.751703 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-13 23:19:45.751963 | orchestrator | Tuesday 13 May 2025 23:19:45 +0000 (0:00:04.952) 0:01:42.890 *********** 2025-05-13 23:19:46.138151 | orchestrator | ok: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 23:19:46.232066 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 23:19:46.521547 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 23:19:47.020011 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 23:19:47.020446 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 23:19:47.022255 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 23:19:47.024373 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 23:19:47.024497 | orchestrator | 2025-05-13 23:19:47.025398 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-13 23:19:47.029897 | orchestrator | Tuesday 13 May 2025 23:19:47 +0000 (0:00:01.294) 0:01:44.184 *********** 2025-05-13 23:19:47.104744 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-13 23:19:47.209487 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:19:47.209643 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-13 23:19:47.306397 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:19:47.306629 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-13 23:19:47.394846 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:19:47.395228 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-13 23:19:47.478361 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:19:48.078279 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-13 23:19:48.079928 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-13 23:19:48.081238 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-13 23:19:48.082333 | orchestrator | 2025-05-13 23:19:48.082863 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-13 23:19:48.083855 | orchestrator | Tuesday 13 May 2025 23:19:48 +0000 (0:00:01.062) 0:01:45.246 *********** 2025-05-13 23:19:48.384974 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-13 23:19:48.483347 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:19:48.483913 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-13 23:19:48.571044 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:19:48.571994 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-13 23:19:48.658263 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:19:48.660700 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-13 23:19:48.748109 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:19:49.750599 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-13 23:19:49.751955 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-13 23:19:49.754106 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-13 23:19:49.756770 | orchestrator | 2025-05-13 23:19:49.757650 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-13 23:19:49.758443 | orchestrator | Tuesday 13 May 2025 23:19:49 +0000 (0:00:01.673) 0:01:46.920 *********** 2025-05-13 23:19:50.109244 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:19:50.194260 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:19:50.289364 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:19:50.385518 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:19:50.473147 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:19:51.205793 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:19:51.206322 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:19:51.209245 | orchestrator |
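A note on the sysctl block above: each group key carries its own parameter list (vm.max_map_count for elasticsearch hosts, TCP keepalive and backlog tuning for rabbitmq hosts, vm.swappiness for every host, nf_conntrack_max for compute nodes, inotify limits for k3s nodes), and hosts outside the matching group skip the items, which is exactly the per-node skipping/ok pattern printed. A minimal sketch of such a task with the ansible.posix.sysctl module, assuming the item layout shown in the loop output; this is illustrative, not the actual osism.commons.sysctl source, and the loop_var name and group-based guard are assumptions:

    # Apply one group's kernel parameters on hosts that belong to that group.
    - name: "Set sysctl parameters on {{ item.key }}"
      ansible.posix.sysctl:
        name: "{{ sysctl_item.name }}"    # e.g. net.core.somaxconn
        value: "{{ sysctl_item.value }}"  # e.g. 4096
        state: present
        sysctl_set: true                  # verify with 'sysctl -w', not only the config file
        reload: true
      loop: "{{ item.value }}"
      loop_control:
        loop_var: sysctl_item
      when: item.key == 'generic' or item.key in group_names

That guard matches the results above: testbed-node-0/1/2 apply the elasticsearch and rabbitmq values, testbed-node-3/4/5 apply the compute and k3s_node values, and vm.swappiness is set everywhere.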
2025-05-13 23:19:51.210778 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-13 23:19:51.211804 | orchestrator | Tuesday 13 May 2025 23:19:51 +0000 (0:00:01.451) 0:01:48.372 *********** 2025-05-13 23:19:57.150343 | orchestrator | ok: [testbed-manager] 2025-05-13 23:19:57.150544 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:19:57.152876 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:19:57.152968 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:19:57.152982 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:19:57.152994 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:19:57.153007 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:19:57.153019 | orchestrator | 2025-05-13 23:19:57.153089 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-13 23:19:57.153245 | orchestrator | Tuesday 13 May 2025 23:19:57 +0000 (0:00:05.948) 0:01:54.320 *********** 2025-05-13 23:19:57.234347 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-13 23:19:57.329795 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:19:57.330816 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-13 23:19:57.409047 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:19:57.410114 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-13 23:19:57.493287 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:19:57.493771 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-13 23:19:57.589896 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:19:57.591118 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-13 23:19:57.879071 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:19:57.880567 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-13 23:19:58.036844 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:19:58.040743 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-13 23:19:58.041417 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:19:58.042427 | orchestrator | 2025-05-13 23:19:58.043572 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-13 23:19:58.044701 | orchestrator | Tuesday 13 May 2025 23:19:58 +0000 (0:00:00.880) 0:01:55.201 *********** 2025-05-13 23:19:59.874699 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-13 23:19:59.875165 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-13 23:19:59.876230 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-13 23:19:59.876269 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-13 23:19:59.876747 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-13 23:19:59.877822 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-13 23:19:59.879243 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-13 23:19:59.879965 | orchestrator |
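The services role above is a small guard rail: it gathers service facts, skips the nscd check on these hosts, and makes sure required services (here just cron) are running and enabled. A sketch of the start/enable step; the list variable name is an assumption, not necessarily the role's real one:

    # Ensure each required service is running and enabled at boot.
    - name: Start/enable required services
      ansible.builtin.service:
        name: "{{ service }}"
        state: started
        enabled: true
      loop: "{{ services_required | default(['cron']) }}"
      loop_control:
        loop_var: service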
2025-05-13 23:19:59.881582 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-13 23:19:59.881700 | orchestrator | Tuesday 13 May 2025 23:19:59 +0000 (0:00:01.839) 0:01:57.041 *********** 2025-05-13 23:20:00.867498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:20:00.868539 | orchestrator | 2025-05-13 23:20:00.869274 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-13 23:20:00.871060 | orchestrator | Tuesday 13 May 2025 23:20:00 +0000 (0:00:00.996) 0:01:58.037 *********** 2025-05-13 23:20:02.623993 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:02.624143 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:02.624230 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:02.625146 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:02.628690 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:02.628731 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:02.628751 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:02.628772 | orchestrator | 2025-05-13 23:20:02.629848 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-13 23:20:02.630443 | orchestrator | Tuesday 13 May 2025 23:20:02 +0000 (0:00:01.756) 0:01:59.793 *********** 2025-05-13 23:20:03.114744 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:03.599418 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:03.599515 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:03.600434 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:03.601398 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:03.602388 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:03.603115 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:03.604064 | orchestrator | 2025-05-13 23:20:03.605206 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-13 23:20:03.605938 | orchestrator | Tuesday 13 May 2025 23:20:03 +0000 (0:00:00.972) 0:02:00.766 *********** 2025-05-13 23:20:04.110730 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:04.193160 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:04.847328 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:04.847869 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:04.850181 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:04.850656 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:04.852468 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:04.853801 | orchestrator | 2025-05-13 23:20:04.854802 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-13 23:20:04.855765 | orchestrator | Tuesday 13 May 2025 23:20:04 +0000 (0:00:01.248) 0:02:02.014 *********** 2025-05-13 23:20:05.811860 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:05.814561 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:05.814685 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:05.815885 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:05.816862 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:05.817817 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:05.818519 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:05.819263 | orchestrator | 2025-05-13 23:20:05.819940 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-13 23:20:05.820803 | orchestrator | Tuesday 13 May 2025 23:20:05 +0000 (0:00:00.962) 0:02:02.976 *********** 2025-05-13 23:20:05.977719 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:20:06.264178 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:20:06.346176 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:20:06.421027 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:20:06.507588 | orchestrator | skipping: [testbed-node-3]
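For context on the motd tasks: Ubuntu's dynamic MOTD machinery is dismantled (update-motd removed, motd-news disabled) before static motd, issue and issue.net files are copied in, and sshd is later configured not to print the MOTD. A sketch of the disable step, assuming /etc/default/motd-news and a stat result registered by the existence check above; the register name is hypothetical:

    # Turn off Ubuntu's motd-news fetcher, but only if the default file exists.
    - name: Disable the dynamic motd-news service
      ansible.builtin.lineinfile:
        path: /etc/default/motd-news
        regexp: '^ENABLED='
        line: 'ENABLED=0'
      when: motd_news_file.stat.exists   # hypothetical register from the stat task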
2025-05-13 23:20:06.621047 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:20:06.622582 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:20:06.623679 | orchestrator | 2025-05-13 23:20:06.625432 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-13 23:20:06.627344 | orchestrator | Tuesday 13 May 2025 23:20:06 +0000 (0:00:00.812) 0:02:03.789 *********** 2025-05-13 23:20:08.027562 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:08.027801 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:08.029635 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:08.030679 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:08.032875 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:08.034094 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:08.034991 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:08.036291 | orchestrator | 2025-05-13 23:20:08.038158 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-13 23:20:08.038844 | orchestrator | Tuesday 13 May 2025 23:20:08 +0000 (0:00:01.405) 0:02:05.194 *********** 2025-05-13 23:20:09.711459 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:09.711564 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:09.713071 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:09.714145 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:09.715239 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:09.716557 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:09.718131 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:09.719106 | orchestrator | 2025-05-13 23:20:09.719876 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-05-13 23:20:09.720790 | orchestrator | Tuesday 13 May 2025 23:20:09 +0000 (0:00:01.680) 0:02:06.875 *********** 2025-05-13 23:20:11.141318 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:11.142691 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:11.142742 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:11.144875 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:11.146819 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:11.147814 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:11.148862 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:11.149973 | orchestrator | 2025-05-13 23:20:11.150593 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-05-13 23:20:11.151209 | orchestrator | Tuesday 13 May 2025 23:20:11 +0000 (0:00:01.435) 0:02:08.310 *********** 2025-05-13 23:20:11.300510 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:20:11.400751 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:20:11.479649 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:20:11.745870 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:20:11.825444 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:20:11.971414 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:20:11.971685 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:20:11.972589 | orchestrator | 2025-05-13 23:20:11.973828 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-13 23:20:11.976573 | orchestrator | Tuesday 13 May 2025 23:20:11 +0000 (0:00:00.827) 0:02:09.138 *********** 2025-05-13 23:20:12.503460 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:13.612940 | orchestrator | ok: 
[testbed-node-0] 2025-05-13 23:20:13.613866 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:13.617715 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:13.618468 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:13.619692 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:13.620603 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:13.624484 | orchestrator | 2025-05-13 23:20:13.625419 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-13 23:20:13.626082 | orchestrator | Tuesday 13 May 2025 23:20:13 +0000 (0:00:01.640) 0:02:10.779 *********** 2025-05-13 23:20:14.720176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:20:14.720904 | orchestrator | 2025-05-13 23:20:14.721910 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-13 23:20:14.723237 | orchestrator | Tuesday 13 May 2025 23:20:14 +0000 (0:00:01.107) 0:02:11.887 *********** 2025-05-13 23:20:16.880057 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:16.882903 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:16.885413 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:16.885698 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:16.886725 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:16.887590 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:16.888674 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:16.889286 | orchestrator | 2025-05-13 23:20:16.890209 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-13 23:20:16.891254 | orchestrator | Tuesday 13 May 2025 23:20:16 +0000 (0:00:02.162) 0:02:14.049 *********** 2025-05-13 23:20:18.382234 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:18.383164 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:18.384302 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:18.386573 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:18.386719 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:18.387239 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:18.387925 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:18.389185 | orchestrator | 2025-05-13 23:20:18.389800 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-13 23:20:18.390520 | orchestrator | Tuesday 13 May 2025 23:20:18 +0000 (0:00:01.499) 0:02:15.548 *********** 2025-05-13 23:20:19.133328 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:20.280308 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:20.280862 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:20.284842 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:20.284968 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:20.284985 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:20.284996 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:20.285007 | orchestrator | 2025-05-13 23:20:20.285307 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-13 23:20:20.287104 | orchestrator | Tuesday 13 May 2025 23:20:20 +0000 (0:00:01.898) 0:02:17.446 *********** 2025-05-13 23:20:21.498454 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:20:21.499026 | orchestrator | 2025-05-13 23:20:21.500173 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-13 23:20:21.503486 | orchestrator | Tuesday 13 May 2025 23:20:21 +0000 (0:00:01.218) 0:02:18.665 *********** 2025-05-13 23:20:23.598713 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:23.599293 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:23.600497 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:23.601075 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:23.604738 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:23.605906 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:23.607030 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:23.608358 | orchestrator | 2025-05-13 23:20:23.609107 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-13 23:20:23.610354 | orchestrator | Tuesday 13 May 2025 23:20:23 +0000 (0:00:02.100) 0:02:20.766 *********** 2025-05-13 23:20:24.040653 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:24.541850 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:24.542134 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:24.542252 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:24.543801 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:24.545290 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:24.546488 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:24.547343 | orchestrator | 2025-05-13 23:20:24.547498 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-13 23:20:24.548070 | orchestrator | Tuesday 13 May 2025 23:20:24 +0000 (0:00:00.948) 0:02:21.714 *********** 2025-05-13 23:20:26.160006 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:26.160322 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:26.160758 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:26.162773 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:26.166458 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:26.166516 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:26.166528 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:26.166540 | orchestrator | 2025-05-13 23:20:26.166554 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-13 23:20:26.166566 | orchestrator | Tuesday 13 May 2025 23:20:26 +0000 (0:00:01.613) 0:02:23.327 *********** 2025-05-13 23:20:28.067532 | orchestrator | changed: [testbed-manager] 2025-05-13 23:20:28.068262 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:20:28.068564 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:20:28.069477 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:20:28.070118 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:20:28.071064 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:20:28.071940 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:20:28.072216 | orchestrator | 2025-05-13 23:20:28.072776 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-13 23:20:28.073334 | orchestrator | Tuesday 13 May 2025 23:20:28 +0000 (0:00:01.907) 0:02:25.235 *********** 2025-05-13 23:20:28.253989 | 
orchestrator | ok: [testbed-manager] 2025-05-13 23:20:28.331078 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:28.417908 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:28.500095 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:28.586764 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:28.712822 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:28.713988 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:28.715235 | orchestrator | 2025-05-13 23:20:28.717133 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-13 23:20:28.717429 | orchestrator | Tuesday 13 May 2025 23:20:28 +0000 (0:00:00.648) 0:02:25.883 *********** 2025-05-13 23:20:28.875454 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:28.958898 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:29.042309 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:29.130794 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:29.212747 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:29.563283 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:29.564849 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:29.567781 | orchestrator | 2025-05-13 23:20:29.567832 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-13 23:20:29.567847 | orchestrator | Tuesday 13 May 2025 23:20:29 +0000 (0:00:00.847) 0:02:26.731 *********** 2025-05-13 23:20:29.727993 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:29.820513 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:29.906780 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:29.990486 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:30.086977 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:30.231667 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:30.231887 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:30.232789 | orchestrator | 2025-05-13 23:20:30.233793 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-13 23:20:30.234182 | orchestrator | Tuesday 13 May 2025 23:20:30 +0000 (0:00:00.667) 0:02:27.399 *********** 2025-05-13 23:20:36.182062 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:36.185721 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:36.185767 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:36.185779 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:36.185791 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:36.188162 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:36.188309 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:36.189270 | orchestrator | 2025-05-13 23:20:36.189927 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-13 23:20:36.190840 | orchestrator | Tuesday 13 May 2025 23:20:36 +0000 (0:00:05.948) 0:02:33.348 *********** 2025-05-13 23:20:37.261188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:20:37.262514 | orchestrator | 2025-05-13 23:20:37.265029 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-13 23:20:37.265999 | orchestrator | Tuesday 13 May 2025 23:20:37 +0000 (0:00:01.078) 0:02:34.426 *********** 2025-05-13 
23:20:37.348333 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-13 23:20:37.348966 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-13 23:20:37.636305 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:20:37.636859 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-13 23:20:37.638006 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-13 23:20:37.741093 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:20:37.741611 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-13 23:20:37.742762 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-13 23:20:37.857377 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:20:37.858202 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-13 23:20:37.858945 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-13 23:20:37.947444 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:20:37.948268 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-13 23:20:37.948979 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-13 23:20:38.044525 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:20:38.045768 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-13 23:20:38.046448 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-13 23:20:38.184077 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:20:38.185287 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-13 23:20:38.186250 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-13 23:20:38.187985 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:20:38.188561 | orchestrator | 2025-05-13 23:20:38.189391 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-13 23:20:38.190232 | orchestrator | Tuesday 13 May 2025 23:20:38 +0000 (0:00:00.924) 0:02:35.351 *********** 2025-05-13 23:20:39.438890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:20:39.443046 | orchestrator | 2025-05-13 23:20:39.443127 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-13 23:20:39.443143 | orchestrator | Tuesday 13 May 2025 23:20:39 +0000 (0:00:01.254) 0:02:36.605 *********** 2025-05-13 23:20:39.528929 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-13 23:20:39.617039 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:20:39.617861 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-13 23:20:39.700143 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:20:39.702245 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-13 23:20:39.786335 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:20:39.786805 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-13 23:20:39.873447 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:20:39.873696 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-13 23:20:39.960270 | orchestrator | skipping: 
[testbed-node-3] 2025-05-13 23:20:39.960568 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-13 23:20:40.152137 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:20:40.154679 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-13 23:20:40.155904 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:20:40.157377 | orchestrator | 2025-05-13 23:20:40.158099 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-13 23:20:40.158916 | orchestrator | Tuesday 13 May 2025 23:20:40 +0000 (0:00:00.715) 0:02:37.320 *********** 2025-05-13 23:20:41.380767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:20:41.381882 | orchestrator | 2025-05-13 23:20:41.384087 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-13 23:20:41.385415 | orchestrator | Tuesday 13 May 2025 23:20:41 +0000 (0:00:01.223) 0:02:38.544 *********** 2025-05-13 23:20:43.070976 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:43.071096 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:43.071113 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:43.071125 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:43.071136 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:43.071220 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:43.072488 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:43.073633 | orchestrator | 2025-05-13 23:20:43.074839 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-13 23:20:43.075727 | orchestrator | Tuesday 13 May 2025 23:20:43 +0000 (0:00:01.691) 0:02:40.236 *********** 2025-05-13 23:20:44.565513 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:44.566602 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:44.568106 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:44.568991 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:44.570439 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:44.571040 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:44.571388 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:44.572279 | orchestrator | 2025-05-13 23:20:44.572934 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-13 23:20:44.573917 | orchestrator | Tuesday 13 May 2025 23:20:44 +0000 (0:00:01.498) 0:02:41.734 *********** 2025-05-13 23:20:46.234977 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:46.235819 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:46.236695 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:46.238787 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:46.238861 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:46.238874 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:46.239437 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:46.240930 | orchestrator | 2025-05-13 23:20:46.242145 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-13 23:20:46.242888 | orchestrator | Tuesday 13 May 2025 23:20:46 +0000 (0:00:01.665) 0:02:43.400 *********** 2025-05-13 23:20:48.180144 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:48.180454 | orchestrator | 
ok: [testbed-node-1] 2025-05-13 23:20:48.182438 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:48.184756 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:48.186745 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:48.187931 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:48.189338 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:48.190474 | orchestrator | 2025-05-13 23:20:48.191332 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-13 23:20:48.192133 | orchestrator | Tuesday 13 May 2025 23:20:48 +0000 (0:00:01.944) 0:02:45.345 *********** 2025-05-13 23:20:50.678482 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:50.678586 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:50.679637 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:50.680167 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:50.680775 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:50.681112 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:50.681685 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:50.681847 | orchestrator | 2025-05-13 23:20:50.682294 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-13 23:20:50.683032 | orchestrator | Tuesday 13 May 2025 23:20:50 +0000 (0:00:02.497) 0:02:47.842 *********** 2025-05-13 23:20:51.722829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:20:51.723076 | orchestrator | 2025-05-13 23:20:51.724151 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-13 23:20:51.727471 | orchestrator | Tuesday 13 May 2025 23:20:51 +0000 (0:00:01.047) 0:02:48.889 *********** 2025-05-13 23:20:52.193724 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:52.290774 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:53.358164 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:53.358804 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:53.359236 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:53.360044 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:53.362164 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:53.365520 | orchestrator | 2025-05-13 23:20:53.366696 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-13 23:20:53.367169 | orchestrator | Tuesday 13 May 2025 23:20:53 +0000 (0:00:01.633) 0:02:50.523 *********** 2025-05-13 23:20:55.637518 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:55.638126 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:55.638285 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:55.638953 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:55.639578 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:55.640088 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:55.640881 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:55.641517 | orchestrator | 2025-05-13 23:20:55.641915 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-13 23:20:55.642396 | orchestrator | Tuesday 13 May 2025 23:20:55 +0000 (0:00:02.277) 0:02:52.800 *********** 2025-05-13 23:20:56.761574 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:56.762396 | orchestrator | ok: [testbed-node-0] 
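The timezone role above installs tzdata and pins every host to UTC; the /etc/adjtime tasks that follow are conditional and skip on these hosts. A minimal sketch of the UTC step, assuming the community.general.timezone module; the role's real implementation is not visible in this log:

    # Set the system timezone; on systemd hosts this goes through timedatectl.
    - name: Set timezone to UTC
      community.general.timezone:
        name: UTC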
2025-05-13 23:20:56.763401 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:56.764029 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:56.765504 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:56.766842 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:56.767655 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:56.768894 | orchestrator | 2025-05-13 23:20:56.769853 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-13 23:20:56.770845 | orchestrator | Tuesday 13 May 2025 23:20:56 +0000 (0:00:01.130) 0:02:53.930 *********** 2025-05-13 23:20:56.927010 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:20:57.006795 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:20:57.320268 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:20:57.404486 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:20:57.482893 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:20:57.600214 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:20:57.600380 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:20:57.600778 | orchestrator | 2025-05-13 23:20:57.601129 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-13 23:20:57.601795 | orchestrator | Tuesday 13 May 2025 23:20:57 +0000 (0:00:00.838) 0:02:54.768 *********** 2025-05-13 23:20:57.766837 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:20:57.840577 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:20:57.919089 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:20:58.002119 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:20:58.114172 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:20:58.813362 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:20:58.813860 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:20:58.814105 | orchestrator | 2025-05-13 23:20:58.814821 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-13 23:20:58.818510 | orchestrator | Tuesday 13 May 2025 23:20:58 +0000 (0:00:01.211) 0:02:55.980 *********** 2025-05-13 23:20:58.982500 | orchestrator | ok: [testbed-manager] 2025-05-13 23:20:59.065761 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:20:59.153057 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:20:59.250362 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:20:59.339477 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:20:59.471103 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:20:59.471704 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:20:59.473191 | orchestrator | 2025-05-13 23:20:59.473795 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-13 23:20:59.474969 | orchestrator | Tuesday 13 May 2025 23:20:59 +0000 (0:00:00.657) 0:02:56.638 *********** 2025-05-13 23:20:59.628684 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:20:59.703365 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:20:59.788146 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:20:59.866382 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:00.139504 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:00.257884 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:00.258692 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:00.260893 | orchestrator | 2025-05-13 23:21:00.262439 | orchestrator | TASK [osism.services.docker : Set 
docker_cli_version variable to default value] *** 2025-05-13 23:21:00.263483 | orchestrator | Tuesday 13 May 2025 23:21:00 +0000 (0:00:00.786) 0:02:57.424 *********** 2025-05-13 23:21:00.426684 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:00.511530 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:00.592775 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:00.672428 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:00.756719 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:00.900836 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:00.901951 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:00.902860 | orchestrator | 2025-05-13 23:21:00.903808 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-05-13 23:21:00.904740 | orchestrator | Tuesday 13 May 2025 23:21:00 +0000 (0:00:00.642) 0:02:58.067 *********** 2025-05-13 23:21:01.057184 | orchestrator | ok: [testbed-manager] => { 2025-05-13 23:21:01.058115 | orchestrator |  "docker_version": "5:27.5.1" 2025-05-13 23:21:01.059559 | orchestrator | } 2025-05-13 23:21:01.144782 | orchestrator | ok: [testbed-node-0] => { 2025-05-13 23:21:01.145371 | orchestrator |  "docker_version": "5:27.5.1" 2025-05-13 23:21:01.146450 | orchestrator | } 2025-05-13 23:21:01.220980 | orchestrator | ok: [testbed-node-1] => { 2025-05-13 23:21:01.221166 | orchestrator |  "docker_version": "5:27.5.1" 2025-05-13 23:21:01.221673 | orchestrator | } 2025-05-13 23:21:01.305223 | orchestrator | ok: [testbed-node-2] => { 2025-05-13 23:21:01.305904 | orchestrator |  "docker_version": "5:27.5.1" 2025-05-13 23:21:01.306818 | orchestrator | } 2025-05-13 23:21:01.386742 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 23:21:01.387894 | orchestrator |  "docker_version": "5:27.5.1" 2025-05-13 23:21:01.389307 | orchestrator | } 2025-05-13 23:21:01.696356 | orchestrator | ok: [testbed-node-4] => { 2025-05-13 23:21:01.696525 | orchestrator |  "docker_version": "5:27.5.1" 2025-05-13 23:21:01.697750 | orchestrator | } 2025-05-13 23:21:01.699126 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 23:21:01.700361 | orchestrator |  "docker_version": "5:27.5.1" 2025-05-13 23:21:01.701908 | orchestrator | } 2025-05-13 23:21:01.702732 | orchestrator | 2025-05-13 23:21:01.704364 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-05-13 23:21:01.705489 | orchestrator | Tuesday 13 May 2025 23:21:01 +0000 (0:00:00.797) 0:02:58.865 *********** 2025-05-13 23:21:01.861842 | orchestrator | ok: [testbed-manager] => { 2025-05-13 23:21:01.863431 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-05-13 23:21:01.865308 | orchestrator | } 2025-05-13 23:21:01.961346 | orchestrator | ok: [testbed-node-0] => { 2025-05-13 23:21:01.961870 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-05-13 23:21:01.962991 | orchestrator | } 2025-05-13 23:21:02.037877 | orchestrator | ok: [testbed-node-1] => { 2025-05-13 23:21:02.038102 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-05-13 23:21:02.038510 | orchestrator | } 2025-05-13 23:21:02.121426 | orchestrator | ok: [testbed-node-2] => { 2025-05-13 23:21:02.122205 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-05-13 23:21:02.122891 | orchestrator | } 2025-05-13 23:21:02.202373 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 23:21:02.202589 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-05-13 23:21:02.203877 | orchestrator | } 2025-05-13 23:21:02.332256 | orchestrator | ok: 
[testbed-node-4] => { 2025-05-13 23:21:02.332427 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-05-13 23:21:02.332827 | orchestrator | } 2025-05-13 23:21:02.333352 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 23:21:02.334322 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-05-13 23:21:02.334801 | orchestrator | } 2025-05-13 23:21:02.335591 | orchestrator | 2025-05-13 23:21:02.337183 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-13 23:21:02.338559 | orchestrator | Tuesday 13 May 2025 23:21:02 +0000 (0:00:00.635) 0:02:59.500 *********** 2025-05-13 23:21:02.492258 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:02.582426 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:02.664705 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:02.738471 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:02.819975 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:02.956741 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:02.959230 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:02.959285 | orchestrator | 2025-05-13 23:21:02.959299 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-13 23:21:02.960146 | orchestrator | Tuesday 13 May 2025 23:21:02 +0000 (0:00:00.623) 0:03:00.124 *********** 2025-05-13 23:21:03.329121 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:03.410306 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:03.493181 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:03.571318 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:03.655167 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:03.775836 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:03.776929 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:03.777656 | orchestrator | 2025-05-13 23:21:03.778911 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-13 23:21:03.779646 | orchestrator | Tuesday 13 May 2025 23:21:03 +0000 (0:00:00.819) 0:03:00.944 *********** 2025-05-13 23:21:05.018157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:21:05.019868 | orchestrator | 2025-05-13 23:21:05.020226 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-13 23:21:05.022292 | orchestrator | Tuesday 13 May 2025 23:21:05 +0000 (0:00:01.240) 0:03:02.184 *********** 2025-05-13 23:21:06.174447 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:06.174609 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:06.174850 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:06.175504 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:06.176461 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:06.178005 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:06.178579 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:06.179184 | orchestrator | 2025-05-13 23:21:06.179705 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-13 23:21:06.180306 | orchestrator | Tuesday 13 May 2025 23:21:06 +0000 (0:00:01.160) 0:03:03.344 *********** 2025-05-13 23:21:09.286524 | orchestrator | 
ok: [testbed-node-3] 2025-05-13 23:21:09.287126 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:09.288722 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:09.289602 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:09.290716 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:09.291336 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:09.291908 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:09.292947 | orchestrator | 2025-05-13 23:21:09.297368 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-13 23:21:09.297979 | orchestrator | Tuesday 13 May 2025 23:21:09 +0000 (0:00:03.109) 0:03:06.454 *********** 2025-05-13 23:21:09.372752 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-13 23:21:09.372844 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-13 23:21:09.451465 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-13 23:21:09.451996 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-13 23:21:09.452880 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-13 23:21:09.453567 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-13 23:21:09.525242 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:09.525915 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-13 23:21:09.526915 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-13 23:21:09.527537 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-13 23:21:09.617361 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:09.618294 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-13 23:21:09.619740 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-13 23:21:09.621106 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-13 23:21:09.690935 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:09.691154 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-13 23:21:09.692450 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-13 23:21:09.692944 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-13 23:21:09.759035 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:09.760020 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-13 23:21:09.761149 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-13 23:21:09.763429 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-13 23:21:09.909374 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:09.909547 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:09.911818 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-13 23:21:09.913210 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-13 23:21:09.914470 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-13 23:21:09.915188 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:09.916307 | orchestrator | 2025-05-13 23:21:09.917044 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-13 23:21:09.918099 | orchestrator | Tuesday 13 May 2025 23:21:09 +0000 (0:00:00.624) 0:03:07.078 *********** 2025-05-13 23:21:12.039383 | orchestrator | ok: [testbed-manager] 2025-05-13 
23:21:12.040515 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:12.042326 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:12.045047 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:12.046646 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:12.048158 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:12.048261 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:12.049729 | orchestrator | 2025-05-13 23:21:12.050775 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-13 23:21:12.051589 | orchestrator | Tuesday 13 May 2025 23:21:12 +0000 (0:00:02.128) 0:03:09.206 *********** 2025-05-13 23:21:13.110994 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:13.111522 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:13.113054 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:13.115893 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:13.116250 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:13.118539 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:13.118885 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:13.119557 | orchestrator | 2025-05-13 23:21:13.120125 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-13 23:21:13.121064 | orchestrator | Tuesday 13 May 2025 23:21:13 +0000 (0:00:01.072) 0:03:10.278 *********** 2025-05-13 23:21:14.166323 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:14.166460 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:14.166966 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:14.167524 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:14.168368 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:14.168950 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:14.169521 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:14.170047 | orchestrator | 2025-05-13 23:21:14.170467 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-13 23:21:14.170796 | orchestrator | Tuesday 13 May 2025 23:21:14 +0000 (0:00:01.057) 0:03:11.336 *********** 2025-05-13 23:21:17.462558 | orchestrator | changed: [testbed-manager] 2025-05-13 23:21:17.462720 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:21:17.462809 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:21:17.465393 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:17.465479 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:21:17.468230 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:21:17.468360 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:21:17.468767 | orchestrator | 2025-05-13 23:21:17.469096 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-13 23:21:17.469258 | orchestrator | Tuesday 13 May 2025 23:21:17 +0000 (0:00:03.291) 0:03:14.627 *********** 2025-05-13 23:21:18.764077 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:18.764234 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:18.764320 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:18.765505 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:18.766136 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:18.767259 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:18.767298 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:18.768575 | orchestrator | 2025-05-13 23:21:18.769685 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 
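Both pin tasks in this block (docker above, docker-cli just below) hold the engine and the CLI at the matched version reported earlier, 5:27.5.1. A minimal sketch of such a pin expressed as an apt preferences file; the template actually used by osism.services.docker is not visible in this log, so the package name, pin pattern, and destination path are illustrative assumptions:

    # Hypothetical version pin written as an apt preferences file.
    # Package, version, and path are assumptions for illustration,
    # not taken from the role.
    - name: Pin docker-ce package version
      ansible.builtin.copy:
        content: |
          Package: docker-ce
          Pin: version 5:27.5.1*
          Pin-Priority: 1000
        dest: /etc/apt/preferences.d/docker-ce
        mode: "0644"

With a priority of 1000, apt keeps the pinned version even when newer candidates appear, which is what prevents later package upgrades from moving Docker.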
2025-05-13 23:21:18.770178 | orchestrator | Tuesday 13 May 2025 23:21:18 +0000 (0:00:01.303) 0:03:15.931 *********** 2025-05-13 23:21:20.316109 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:20.316758 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:20.317738 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:20.318880 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:20.320017 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:20.320340 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:20.321080 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:20.322205 | orchestrator | 2025-05-13 23:21:20.323599 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-13 23:21:20.324584 | orchestrator | Tuesday 13 May 2025 23:21:20 +0000 (0:00:01.552) 0:03:17.484 *********** 2025-05-13 23:21:21.295010 | orchestrator | changed: [testbed-manager] 2025-05-13 23:21:21.295283 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:21.296043 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:21:21.296436 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:21:21.297585 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:21:21.297609 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:21:21.298146 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:21:21.298977 | orchestrator | 2025-05-13 23:21:21.299348 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-13 23:21:21.299852 | orchestrator | Tuesday 13 May 2025 23:21:21 +0000 (0:00:00.977) 0:03:18.461 *********** 2025-05-13 23:21:23.315304 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:23.316059 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:23.318420 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:23.319407 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:23.321044 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:23.321546 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:23.322717 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:23.323832 | orchestrator | 2025-05-13 23:21:23.324351 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-13 23:21:23.325414 | orchestrator | Tuesday 13 May 2025 23:21:23 +0000 (0:00:02.020) 0:03:20.482 *********** 2025-05-13 23:21:24.182549 | orchestrator | changed: [testbed-manager] 2025-05-13 23:21:24.184036 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:24.184952 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:21:24.185044 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:21:24.185541 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:21:24.185979 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:21:24.186204 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:21:24.186563 | orchestrator | 2025-05-13 23:21:24.186849 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-13 23:21:24.187377 | orchestrator | Tuesday 13 May 2025 23:21:24 +0000 (0:00:00.871) 0:03:21.353 *********** 2025-05-13 23:21:26.449972 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:26.450877 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:26.452188 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:26.452226 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:26.453553 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:26.453668 | orchestrator | ok: [testbed-node-5] 
2025-05-13 23:21:26.455123 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:26.455149 | orchestrator | 2025-05-13 23:21:26.456296 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-13 23:21:26.456321 | orchestrator | Tuesday 13 May 2025 23:21:26 +0000 (0:00:02.262) 0:03:23.615 *********** 2025-05-13 23:21:28.370072 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:28.370746 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:28.371970 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:28.373358 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:28.377579 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:28.377617 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:28.377655 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:28.377667 | orchestrator | 2025-05-13 23:21:28.378835 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-13 23:21:28.379877 | orchestrator | Tuesday 13 May 2025 23:21:28 +0000 (0:00:01.919) 0:03:25.535 *********** 2025-05-13 23:21:28.767352 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-13 23:21:29.649066 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-13 23:21:29.650750 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-13 23:21:29.652517 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-13 23:21:29.652833 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-13 23:21:29.655389 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-13 23:21:29.656818 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-13 23:21:29.657903 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-13 23:21:29.659397 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-13 23:21:29.660932 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-13 23:21:29.661829 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-13 23:21:29.662930 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-13 23:21:29.663949 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-13 23:21:29.664560 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-13 23:21:29.665490 | orchestrator | 2025-05-13 23:21:29.667108 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-13 23:21:29.668022 | orchestrator | Tuesday 13 May 2025 23:21:29 +0000 (0:00:01.280) 0:03:26.815 *********** 2025-05-13 23:21:29.791510 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:29.859726 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:29.944144 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:30.010563 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:30.090812 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:30.223545 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:30.223750 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:30.224336 | orchestrator | 2025-05-13 23:21:30.225215 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-13 23:21:30.225827 | orchestrator | Tuesday 13 May 2025 23:21:30 +0000 (0:00:00.578) 0:03:27.394 *********** 2025-05-13 23:21:33.951397 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:33.953184 | orchestrator | ok: [testbed-node-4] 
2025-05-13 23:21:33.954553 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:33.955542 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:33.958440 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:33.959480 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:33.960420 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:33.960969 | orchestrator | 2025-05-13 23:21:33.961420 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-13 23:21:33.962182 | orchestrator | Tuesday 13 May 2025 23:21:33 +0000 (0:00:03.722) 0:03:31.116 *********** 2025-05-13 23:21:34.091607 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:34.160748 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:34.227580 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:34.484581 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:34.551585 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:34.649741 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:34.650979 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:34.651880 | orchestrator | 2025-05-13 23:21:34.652306 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-13 23:21:34.653516 | orchestrator | Tuesday 13 May 2025 23:21:34 +0000 (0:00:00.702) 0:03:31.818 *********** 2025-05-13 23:21:34.745064 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-13 23:21:34.746696 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-13 23:21:34.826948 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:34.827220 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-13 23:21:34.828399 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-13 23:21:34.897057 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:34.898763 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-13 23:21:34.899807 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-13 23:21:34.978548 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:34.979102 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-13 23:21:34.980620 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-13 23:21:35.052462 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:35.052874 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-13 23:21:35.053050 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-13 23:21:35.136598 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:35.136843 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-13 23:21:35.137190 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-13 23:21:35.265326 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:35.266839 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-13 23:21:35.268722 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-13 23:21:35.270075 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:35.271467 | orchestrator | 2025-05-13 23:21:35.272850 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-13 23:21:35.274448 | orchestrator | Tuesday 13 May 2025 
23:21:35 +0000 (0:00:00.617) 0:03:32.436 *********** 2025-05-13 23:21:35.397856 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:35.469816 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:35.535399 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:35.598596 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:35.672882 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:35.781903 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:35.782383 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:35.784459 | orchestrator | 2025-05-13 23:21:35.784816 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-13 23:21:35.786252 | orchestrator | Tuesday 13 May 2025 23:21:35 +0000 (0:00:00.513) 0:03:32.949 *********** 2025-05-13 23:21:35.932118 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:35.999840 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:36.081571 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:36.176692 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:36.247113 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:36.355252 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:36.355801 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:36.357114 | orchestrator | 2025-05-13 23:21:36.358515 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-13 23:21:36.359239 | orchestrator | Tuesday 13 May 2025 23:21:36 +0000 (0:00:00.574) 0:03:33.523 *********** 2025-05-13 23:21:36.496944 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:36.563347 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:36.633041 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:36.701277 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:36.767725 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:36.893245 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:36.893776 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:36.894884 | orchestrator | 2025-05-13 23:21:36.895795 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-13 23:21:36.896287 | orchestrator | Tuesday 13 May 2025 23:21:36 +0000 (0:00:00.539) 0:03:34.063 *********** 2025-05-13 23:21:38.541798 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:38.542004 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:38.542823 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:38.544295 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:38.545413 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:38.546202 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:38.546395 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:38.547018 | orchestrator | 2025-05-13 23:21:38.547709 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-13 23:21:38.548859 | orchestrator | Tuesday 13 May 2025 23:21:38 +0000 (0:00:01.644) 0:03:35.708 *********** 2025-05-13 23:21:39.457481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:21:39.458727 | orchestrator | 2025-05-13 23:21:39.459678 | orchestrator | TASK [osism.services.docker : Create 
plugins directory] ************************ 2025-05-13 23:21:39.460488 | orchestrator | Tuesday 13 May 2025 23:21:39 +0000 (0:00:00.917) 0:03:36.626 *********** 2025-05-13 23:21:39.878218 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:40.317791 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:40.318167 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:40.318559 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:40.319434 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:40.320733 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:40.321693 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:40.321885 | orchestrator | 2025-05-13 23:21:40.323123 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-13 23:21:40.323892 | orchestrator | Tuesday 13 May 2025 23:21:40 +0000 (0:00:00.859) 0:03:37.485 *********** 2025-05-13 23:21:40.827254 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:41.256695 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:41.258134 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:41.258873 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:41.259753 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:41.260322 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:41.261041 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:41.263336 | orchestrator | 2025-05-13 23:21:41.263936 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-13 23:21:41.264780 | orchestrator | Tuesday 13 May 2025 23:21:41 +0000 (0:00:00.940) 0:03:38.425 *********** 2025-05-13 23:21:42.715789 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:42.716946 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:42.718079 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:42.718785 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:42.719905 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:42.720510 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:42.721176 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:42.721831 | orchestrator | 2025-05-13 23:21:42.722467 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-13 23:21:42.723441 | orchestrator | Tuesday 13 May 2025 23:21:42 +0000 (0:00:01.458) 0:03:39.884 *********** 2025-05-13 23:21:42.836960 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:42.956330 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:43.015050 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:43.077700 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:43.691687 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:43.691787 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:43.693376 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:43.694622 | orchestrator | 2025-05-13 23:21:43.695120 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-13 23:21:43.695595 | orchestrator | Tuesday 13 May 2025 23:21:43 +0000 (0:00:00.976) 0:03:40.860 *********** 2025-05-13 23:21:45.048099 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:45.048380 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:45.048576 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:45.050174 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:45.050797 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:45.050831 | 
orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:45.051367 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:45.052130 | orchestrator | 2025-05-13 23:21:45.052727 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-13 23:21:45.053178 | orchestrator | Tuesday 13 May 2025 23:21:45 +0000 (0:00:01.355) 0:03:42.215 *********** 2025-05-13 23:21:46.424586 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:46.425148 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:46.426618 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:46.427677 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:46.428556 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:46.429315 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:46.429338 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:46.429967 | orchestrator | 2025-05-13 23:21:46.430495 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-13 23:21:46.431684 | orchestrator | Tuesday 13 May 2025 23:21:46 +0000 (0:00:01.378) 0:03:43.594 *********** 2025-05-13 23:21:47.537267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:21:47.537840 | orchestrator | 2025-05-13 23:21:47.538934 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-13 23:21:47.540082 | orchestrator | Tuesday 13 May 2025 23:21:47 +0000 (0:00:01.111) 0:03:44.705 *********** 2025-05-13 23:21:48.990183 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:48.990973 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:48.992306 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:48.994328 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:48.994933 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:48.996271 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:48.997832 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:48.998800 | orchestrator | 2025-05-13 23:21:49.000391 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-13 23:21:49.000786 | orchestrator | Tuesday 13 May 2025 23:21:48 +0000 (0:00:01.451) 0:03:46.156 *********** 2025-05-13 23:21:50.184219 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:50.184681 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:50.184955 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:50.185950 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:50.186770 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:50.187304 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:50.191856 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:50.192117 | orchestrator | 2025-05-13 23:21:50.193027 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-13 23:21:50.193741 | orchestrator | Tuesday 13 May 2025 23:21:50 +0000 (0:00:01.193) 0:03:47.350 *********** 2025-05-13 23:21:51.335157 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:51.335266 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:51.335898 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:51.337355 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:51.337865 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:51.339036 | orchestrator | ok: 
[testbed-node-4] 2025-05-13 23:21:51.340330 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:51.341424 | orchestrator | 2025-05-13 23:21:51.341837 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-13 23:21:51.348046 | orchestrator | Tuesday 13 May 2025 23:21:51 +0000 (0:00:01.152) 0:03:48.503 *********** 2025-05-13 23:21:52.700728 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:52.701241 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:52.702423 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:52.702923 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:52.706616 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:52.706668 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:52.706681 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:52.706810 | orchestrator | 2025-05-13 23:21:52.707805 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-13 23:21:52.708615 | orchestrator | Tuesday 13 May 2025 23:21:52 +0000 (0:00:01.365) 0:03:49.868 *********** 2025-05-13 23:21:53.946472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:21:53.947474 | orchestrator | 2025-05-13 23:21:53.948816 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:21:53.951466 | orchestrator | Tuesday 13 May 2025 23:21:53 +0000 (0:00:00.960) 0:03:50.829 *********** 2025-05-13 23:21:53.951497 | orchestrator | 2025-05-13 23:21:53.951510 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:21:53.952124 | orchestrator | Tuesday 13 May 2025 23:21:53 +0000 (0:00:00.039) 0:03:50.868 *********** 2025-05-13 23:21:53.952829 | orchestrator | 2025-05-13 23:21:53.954094 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:21:53.955298 | orchestrator | Tuesday 13 May 2025 23:21:53 +0000 (0:00:00.046) 0:03:50.915 *********** 2025-05-13 23:21:53.956004 | orchestrator | 2025-05-13 23:21:53.956347 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:21:53.956775 | orchestrator | Tuesday 13 May 2025 23:21:53 +0000 (0:00:00.038) 0:03:50.953 *********** 2025-05-13 23:21:53.957209 | orchestrator | 2025-05-13 23:21:53.957607 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:21:53.957973 | orchestrator | Tuesday 13 May 2025 23:21:53 +0000 (0:00:00.039) 0:03:50.993 *********** 2025-05-13 23:21:53.958517 | orchestrator | 2025-05-13 23:21:53.958860 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:21:53.959248 | orchestrator | Tuesday 13 May 2025 23:21:53 +0000 (0:00:00.046) 0:03:51.039 *********** 2025-05-13 23:21:53.959703 | orchestrator | 2025-05-13 23:21:53.960107 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 23:21:53.960494 | orchestrator | Tuesday 13 May 2025 23:21:53 +0000 (0:00:00.038) 0:03:51.078 *********** 2025-05-13 23:21:53.960888 | orchestrator | 2025-05-13 23:21:53.961276 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 
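The run of "Flush handlers" entries above comes from `meta: flush_handlers` calls, which force any queued handlers to run immediately instead of at the end of the play. Only testbed-node-0 reported changed Docker configuration in this run (daemon.json, the systemd overlay, the limits file), so the restart handler below fires on that host alone. A minimal sketch of the notify/flush pattern behind this output; the task and handler names follow the log, the file paths are assumptions:

    # Config task: a change queues the handler via notify.
    - name: Copy daemon.json configuration file
      ansible.builtin.template:
        src: daemon.json.j2
        dest: /etc/docker/daemon.json
      notify: Restart docker service

    # Run queued handlers now rather than at the end of the play.
    - name: Flush handlers
      ansible.builtin.meta: flush_handlers

    # Handler (defined under handlers:); runs only on hosts with changes.
    - name: Restart docker service
      ansible.builtin.service:
        name: docker
        state: restarted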
2025-05-13 23:21:53.961795 | orchestrator | Tuesday 13 May 2025 23:21:53 +0000 (0:00:00.038) 0:03:51.117 *********** 2025-05-13 23:21:55.772846 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:55.773214 | orchestrator | 2025-05-13 23:21:55.774284 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-13 23:21:55.774788 | orchestrator | Tuesday 13 May 2025 23:21:55 +0000 (0:00:01.822) 0:03:52.940 *********** 2025-05-13 23:21:55.885323 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:55.886236 | orchestrator | 2025-05-13 23:21:55.887232 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-13 23:21:55.888834 | orchestrator | Tuesday 13 May 2025 23:21:55 +0000 (0:00:00.113) 0:03:53.053 *********** 2025-05-13 23:21:57.082899 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:57.083023 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:57.083101 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:57.083194 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:57.083481 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:21:57.084016 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:57.084320 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:57.087822 | orchestrator | 2025-05-13 23:21:57.088283 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-13 23:21:57.088788 | orchestrator | Tuesday 13 May 2025 23:21:57 +0000 (0:00:01.198) 0:03:54.252 *********** 2025-05-13 23:21:57.265622 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:21:57.349983 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:21:57.437422 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:21:57.502282 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:21:57.572694 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:21:57.699776 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:21:57.700525 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:21:57.702110 | orchestrator | 2025-05-13 23:21:57.706716 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-13 23:21:57.706947 | orchestrator | Tuesday 13 May 2025 23:21:57 +0000 (0:00:00.618) 0:03:54.870 *********** 2025-05-13 23:21:58.659603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:21:58.661685 | orchestrator | 2025-05-13 23:21:58.663724 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-13 23:21:58.665685 | orchestrator | Tuesday 13 May 2025 23:21:58 +0000 (0:00:00.956) 0:03:55.826 *********** 2025-05-13 23:21:59.135484 | orchestrator | ok: [testbed-manager] 2025-05-13 23:21:59.563990 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:21:59.564093 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:21:59.564107 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:21:59.564120 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:21:59.565262 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:21:59.566175 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:21:59.567836 | orchestrator | 2025-05-13 23:21:59.568452 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-13 
23:21:59.569198 | orchestrator | Tuesday 13 May 2025 23:21:59 +0000 (0:00:00.900) 0:03:56.727 *********** 2025-05-13 23:22:00.294701 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-13 23:22:02.278391 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-13 23:22:02.279357 | orchestrator | ok: [testbed-node-1] => (item=docker_containers) 2025-05-13 23:22:02.281784 | orchestrator | ok: [testbed-node-2] => (item=docker_containers) 2025-05-13 23:22:02.282786 | orchestrator | ok: [testbed-node-3] => (item=docker_containers) 2025-05-13 23:22:02.283846 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-13 23:22:02.284891 | orchestrator | ok: [testbed-node-4] => (item=docker_containers) 2025-05-13 23:22:02.285939 | orchestrator | ok: [testbed-node-5] => (item=docker_containers) 2025-05-13 23:22:02.286724 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-13 23:22:02.287317 | orchestrator | ok: [testbed-node-1] => (item=docker_images) 2025-05-13 23:22:02.288835 | orchestrator | ok: [testbed-node-2] => (item=docker_images) 2025-05-13 23:22:02.289495 | orchestrator | ok: [testbed-node-3] => (item=docker_images) 2025-05-13 23:22:02.290583 | orchestrator | ok: [testbed-node-4] => (item=docker_images) 2025-05-13 23:22:02.291421 | orchestrator | ok: [testbed-node-5] => (item=docker_images) 2025-05-13 23:22:02.291875 | orchestrator | 2025-05-13 23:22:02.293737 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-13 23:22:02.294330 | orchestrator | Tuesday 13 May 2025 23:22:02 +0000 (0:00:02.714) 0:03:59.441 *********** 2025-05-13 23:22:02.404869 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:22:02.478178 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:22:02.546973 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:22:02.620268 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:22:02.696409 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:22:02.799561 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:22:02.800174 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:22:02.801849 | orchestrator | 2025-05-13 23:22:02.805685 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-13 23:22:02.806980 | orchestrator | Tuesday 13 May 2025 23:22:02 +0000 (0:00:00.526) 0:03:59.967 *********** 2025-05-13 23:22:03.817940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:22:03.821929 | orchestrator | 2025-05-13 23:22:03.822004 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-13 23:22:03.822066 | orchestrator | Tuesday 13 May 2025 23:22:03 +0000 (0:00:01.017) 0:04:00.985 *********** 2025-05-13 23:22:04.261476 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:04.701965 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:04.703126 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:04.703314 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:04.704334 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:04.708433 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:04.708827 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:04.709788 | orchestrator | 
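The docker_containers and docker_images items copied a few entries above are Ansible local facts: files placed in the custom facts directory, re-read on the next fact-gathering pass and exposed under `ansible_local`. A sketch under the assumption that the role ships executable .fact scripts that print JSON into the standard /etc/ansible/facts.d location:

    # Assumed layout: executable .fact scripts whose JSON output becomes
    # ansible_local.docker_containers / ansible_local.docker_images on
    # the next setup run.
    - name: Copy docker fact files
      ansible.builtin.copy:
        src: "{{ item }}.fact"
        dest: "/etc/ansible/facts.d/{{ item }}.fact"
        mode: "0755"
      loop:
        - docker_containers
        - docker_images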
2025-05-13 23:22:04.710613 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-13 23:22:04.710855 | orchestrator | Tuesday 13 May 2025 23:22:04 +0000 (0:00:00.884) 0:04:01.870 *********** 2025-05-13 23:22:05.160574 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:05.556552 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:05.557180 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:05.561074 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:05.562861 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:05.563839 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:05.565302 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:05.568816 | orchestrator | 2025-05-13 23:22:05.569350 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-13 23:22:05.570134 | orchestrator | Tuesday 13 May 2025 23:22:05 +0000 (0:00:00.852) 0:04:02.723 *********** 2025-05-13 23:22:05.690386 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:22:05.750389 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:22:05.819784 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:22:05.887282 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:22:05.951166 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:22:06.050151 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:22:06.050765 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:22:06.052325 | orchestrator | 2025-05-13 23:22:06.053582 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-13 23:22:06.057214 | orchestrator | Tuesday 13 May 2025 23:22:06 +0000 (0:00:00.498) 0:04:03.221 *********** 2025-05-13 23:22:07.459252 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:07.460177 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:07.460439 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:07.461983 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:07.462581 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:07.463168 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:07.464399 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:07.464784 | orchestrator | 2025-05-13 23:22:07.465555 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-13 23:22:07.466265 | orchestrator | Tuesday 13 May 2025 23:22:07 +0000 (0:00:01.406) 0:04:04.628 *********** 2025-05-13 23:22:07.616707 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:22:07.689683 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:22:07.758255 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:22:07.832696 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:22:08.094974 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:22:08.194267 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:22:08.196171 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:22:08.198147 | orchestrator | 2025-05-13 23:22:08.200747 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-13 23:22:08.201152 | orchestrator | Tuesday 13 May 2025 23:22:08 +0000 (0:00:00.734) 0:04:05.362 *********** 2025-05-13 23:22:15.147059 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:15.153889 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:15.155394 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:15.155434 | orchestrator | ok: [testbed-node-2] 
2025-05-13 23:22:15.155447 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:15.155458 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:15.160030 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:22:15.160076 | orchestrator | 2025-05-13 23:22:15.160135 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-13 23:22:15.163453 | orchestrator | Tuesday 13 May 2025 23:22:15 +0000 (0:00:06.950) 0:04:12.312 *********** 2025-05-13 23:22:16.747290 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:16.747501 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:16.751283 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:16.751321 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:22:16.751510 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:16.751914 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:16.752262 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:16.753208 | orchestrator | 2025-05-13 23:22:16.753380 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-13 23:22:16.753790 | orchestrator | Tuesday 13 May 2025 23:22:16 +0000 (0:00:01.606) 0:04:13.919 *********** 2025-05-13 23:22:18.366336 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:18.367315 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:18.367614 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:18.369088 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:18.370870 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:18.371744 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:22:18.372564 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:18.373178 | orchestrator | 2025-05-13 23:22:18.374587 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-13 23:22:18.375011 | orchestrator | Tuesday 13 May 2025 23:22:18 +0000 (0:00:01.613) 0:04:15.532 *********** 2025-05-13 23:22:20.285837 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:20.285946 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:20.286859 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:22:20.287575 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:20.288199 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:20.291869 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:20.292164 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:20.292754 | orchestrator | 2025-05-13 23:22:20.293301 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-13 23:22:20.293750 | orchestrator | Tuesday 13 May 2025 23:22:20 +0000 (0:00:01.919) 0:04:17.452 *********** 2025-05-13 23:22:20.754529 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:21.188027 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:21.188129 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:21.188870 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:21.189216 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:21.190828 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:21.192262 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:21.192795 | orchestrator | 2025-05-13 23:22:21.193806 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-13 23:22:21.194488 | orchestrator | Tuesday 13 May 2025 23:22:21 +0000 (0:00:00.904) 0:04:18.357 *********** 2025-05-13 23:22:21.358897 | orchestrator | skipping: [testbed-manager] 2025-05-13 
23:22:21.422406 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:22:21.487112 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:22:21.554283 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:22:21.618121 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:22:21.989929 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:22:21.990589 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:22:21.994544 | orchestrator | 2025-05-13 23:22:21.994576 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-13 23:22:21.994590 | orchestrator | Tuesday 13 May 2025 23:22:21 +0000 (0:00:00.803) 0:04:19.160 *********** 2025-05-13 23:22:22.102182 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:22:22.173019 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:22:22.226897 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:22:22.290449 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:22:22.354719 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:22:22.465098 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:22:22.465784 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:22:22.466899 | orchestrator | 2025-05-13 23:22:22.467601 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-13 23:22:22.468678 | orchestrator | Tuesday 13 May 2025 23:22:22 +0000 (0:00:00.470) 0:04:19.631 *********** 2025-05-13 23:22:22.703199 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:22.767401 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:22.830836 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:22.904845 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:22.972697 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:23.073455 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:23.073623 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:23.073990 | orchestrator | 2025-05-13 23:22:23.074133 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-13 23:22:23.074464 | orchestrator | Tuesday 13 May 2025 23:22:23 +0000 (0:00:00.612) 0:04:20.244 *********** 2025-05-13 23:22:23.206403 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:23.265296 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:23.353431 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:23.442122 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:23.504723 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:23.611141 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:23.615946 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:23.616611 | orchestrator | 2025-05-13 23:22:23.617158 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-13 23:22:23.617799 | orchestrator | Tuesday 13 May 2025 23:22:23 +0000 (0:00:00.538) 0:04:20.782 *********** 2025-05-13 23:22:23.723824 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:23.785451 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:23.841057 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:23.899217 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:23.966901 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:24.067835 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:24.068303 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:24.069176 | orchestrator | 2025-05-13 23:22:24.069866 | orchestrator | TASK [osism.services.chrony : Populate 
service facts] ************************** 2025-05-13 23:22:24.070469 | orchestrator | Tuesday 13 May 2025 23:22:24 +0000 (0:00:00.454) 0:04:21.237 *********** 2025-05-13 23:22:29.888126 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:29.889971 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:29.891298 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:29.892310 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:29.895534 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:29.895574 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:29.895586 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:29.895598 | orchestrator | 2025-05-13 23:22:29.895962 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-13 23:22:29.896832 | orchestrator | Tuesday 13 May 2025 23:22:29 +0000 (0:00:05.819) 0:04:27.056 *********** 2025-05-13 23:22:30.031570 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:22:30.098679 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:22:30.179941 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:22:30.467620 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:22:30.535843 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:22:30.661456 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:22:30.661612 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:22:30.662738 | orchestrator | 2025-05-13 23:22:30.665692 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-13 23:22:30.665741 | orchestrator | Tuesday 13 May 2025 23:22:30 +0000 (0:00:00.771) 0:04:27.828 *********** 2025-05-13 23:22:31.504847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:22:31.505134 | orchestrator | 2025-05-13 23:22:31.505685 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-13 23:22:31.506764 | orchestrator | Tuesday 13 May 2025 23:22:31 +0000 (0:00:00.847) 0:04:28.675 *********** 2025-05-13 23:22:33.436204 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:33.438404 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:33.439625 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:33.443900 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:33.445785 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:33.447092 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:33.448486 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:33.449222 | orchestrator | 2025-05-13 23:22:33.450895 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-13 23:22:33.451609 | orchestrator | Tuesday 13 May 2025 23:22:33 +0000 (0:00:01.929) 0:04:30.605 *********** 2025-05-13 23:22:34.648597 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:34.648949 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:34.651024 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:34.651942 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:34.653057 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:34.653792 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:34.654597 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:34.655100 | orchestrator | 2025-05-13 23:22:34.656136 | orchestrator | TASK 
[osism.services.chrony : Check if configuration file exists] ************** 2025-05-13 23:22:34.657212 | orchestrator | Tuesday 13 May 2025 23:22:34 +0000 (0:00:01.210) 0:04:31.815 *********** 2025-05-13 23:22:35.080224 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:35.149282 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:35.727133 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:35.728172 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:35.729014 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:35.730777 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:35.731435 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:35.733277 | orchestrator | 2025-05-13 23:22:35.734353 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-13 23:22:35.735321 | orchestrator | Tuesday 13 May 2025 23:22:35 +0000 (0:00:01.079) 0:04:32.895 *********** 2025-05-13 23:22:37.512319 | orchestrator | ok: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:22:37.516952 | orchestrator | ok: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:22:37.516989 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:22:37.517001 | orchestrator | ok: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:22:37.518110 | orchestrator | ok: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:22:37.519338 | orchestrator | ok: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:22:37.521957 | orchestrator | ok: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 23:22:37.522950 | orchestrator | 2025-05-13 23:22:37.523725 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-13 23:22:37.524627 | orchestrator | Tuesday 13 May 2025 23:22:37 +0000 (0:00:01.786) 0:04:34.681 *********** 2025-05-13 23:22:38.601458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:22:38.604531 | orchestrator | 2025-05-13 23:22:38.604848 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-13 23:22:38.607360 | orchestrator | Tuesday 13 May 2025 23:22:38 +0000 (0:00:01.086) 0:04:35.767 *********** 2025-05-13 23:22:46.571450 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:46.571618 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:46.572701 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:46.573816 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:46.574580 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:46.575476 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:46.576163 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:22:46.576564 | orchestrator | 
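As with the Docker configuration earlier, chrony.conf came back changed only on testbed-node-0, so the chrony restart handler near the end of this play (visible further below) fires on that host alone. A plausible handler definition; the handler name matches the log, while the systemd unit name is an assumption for Ubuntu 24.04:

    # Assumed handler body; on Ubuntu 24.04 the unit is typically "chrony".
    - name: Restart chrony service
      ansible.builtin.service:
        name: chrony
        state: restarted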
2025-05-13 23:22:46.577191 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-13 23:22:46.577846 | orchestrator | Tuesday 13 May 2025 23:22:46 +0000 (0:00:07.971) 0:04:43.739 *********** 2025-05-13 23:22:48.087299 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:48.087464 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:48.090223 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:48.090276 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:48.090289 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:48.090976 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:48.091413 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:48.091777 | orchestrator | 2025-05-13 23:22:48.092232 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-13 23:22:48.092681 | orchestrator | Tuesday 13 May 2025 23:22:48 +0000 (0:00:01.514) 0:04:45.254 *********** 2025-05-13 23:22:49.008739 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:49.009846 | orchestrator | 2025-05-13 23:22:49.011064 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-13 23:22:49.011633 | orchestrator | Tuesday 13 May 2025 23:22:48 +0000 (0:00:00.921) 0:04:46.175 *********** 2025-05-13 23:22:49.816206 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:22:49.817277 | orchestrator | 2025-05-13 23:22:49.818700 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-13 23:22:49.820265 | orchestrator | 2025-05-13 23:22:49.820629 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-13 23:22:49.821966 | orchestrator | Tuesday 13 May 2025 23:22:49 +0000 (0:00:00.809) 0:04:46.985 *********** 2025-05-13 23:22:49.940543 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:22:50.020469 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:22:50.115572 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:22:50.190156 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:22:50.264153 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:22:50.405182 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:22:50.406169 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:22:50.410741 | orchestrator | 2025-05-13 23:22:50.410786 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-13 23:22:50.410794 | orchestrator | 2025-05-13 23:22:50.411716 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-13 23:22:50.411968 | orchestrator | Tuesday 13 May 2025 23:22:50 +0000 (0:00:00.588) 0:04:47.573 *********** 2025-05-13 23:22:52.300711 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:52.301132 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:22:52.305484 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:52.308014 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:52.309109 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:52.312291 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:52.312611 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:52.314009 | orchestrator | 2025-05-13 23:22:52.314941 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-13 23:22:52.315461 | orchestrator | Tuesday 13 May 2025 23:22:52 +0000 (0:00:01.816) 0:04:49.390 
*********** 2025-05-13 23:22:53.112603 | orchestrator | ok: [testbed-manager] 2025-05-13 23:22:54.174763 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:54.175001 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:22:54.175610 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:22:54.176891 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:22:54.177525 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:22:54.178122 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:22:54.179226 | orchestrator | 2025-05-13 23:22:54.179593 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-13 23:22:54.180025 | orchestrator | Tuesday 13 May 2025 23:22:54 +0000 (0:00:01.949) 0:04:51.339 *********** 2025-05-13 23:22:54.326630 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:22:54.391737 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:22:54.460574 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:22:54.541014 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:22:54.610178 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:22:54.752189 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:22:54.752573 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:22:54.753174 | orchestrator | 2025-05-13 23:22:54.753872 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-13 23:22:54.756274 | orchestrator | Tuesday 13 May 2025 23:22:54 +0000 (0:00:00.580) 0:04:51.920 *********** 2025-05-13 23:22:55.585520 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:22:55.586558 | orchestrator | 2025-05-13 23:22:55.587329 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-13 23:22:55.587794 | orchestrator | 2025-05-13 23:22:55.588709 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-13 23:22:55.589272 | orchestrator | Tuesday 13 May 2025 23:22:55 +0000 (0:00:00.835) 0:04:52.755 *********** 2025-05-13 23:22:55.718128 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:22:55.842222 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:22:56.106359 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:22:56.170284 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:22:56.318613 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:22:56.320083 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:22:56.320576 | orchestrator | included: osism.commons.state for testbed-node-0 2025-05-13 23:22:56.321670 | orchestrator | 2025-05-13 23:22:56.322376 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-13 23:22:56.323128 | orchestrator | Tuesday 13 May 2025 23:22:56 +0000 (0:00:00.731) 0:04:53.486 *********** 2025-05-13 23:22:56.760005 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:56.760867 | orchestrator | 2025-05-13 23:22:56.761619 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-13 23:22:56.762235 | orchestrator | Tuesday 13 May 2025 23:22:56 +0000 (0:00:00.441) 0:04:53.927 *********** 2025-05-13 23:22:57.443314 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:22:57.443996 | orchestrator | 2025-05-13 23:22:57.445104 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-13 23:22:57.448680 | orchestrator | Tuesday 13 May 2025 23:22:57 +0000 (0:00:00.682) 
0:04:54.610 *********** 2025-05-13 23:22:57.592994 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:22:57.735605 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:22:57.801715 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:22:57.877121 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:22:58.024203 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:22:58.025527 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:22:58.026914 | orchestrator | included: osism.commons.state for testbed-node-0 2025-05-13 23:22:58.028832 | orchestrator | 2025-05-13 23:22:58.028850 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-13 23:22:58.029781 | orchestrator | Tuesday 13 May 2025 23:22:58 +0000 (0:00:00.581) 0:04:55.191 *********** 2025-05-13 23:22:58.453925 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:22:58.454115 | orchestrator | 2025-05-13 23:22:58.455464 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-13 23:22:58.458002 | orchestrator | Tuesday 13 May 2025 23:22:58 +0000 (0:00:00.429) 0:04:55.621 *********** 2025-05-13 23:22:58.872950 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:22:58.873960 | orchestrator | 2025-05-13 23:22:58.874788 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:22:58.875106 | orchestrator | 2025-05-13 23:22:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:22:58.875210 | orchestrator | 2025-05-13 23:22:58 | INFO  | Please wait and do not abort execution. 2025-05-13 23:22:58.876867 | orchestrator | testbed-manager : ok=151  changed=7  unreachable=0 failed=0 skipped=43  rescued=0 ignored=0 2025-05-13 23:22:58.877197 | orchestrator | testbed-node-0 : ok=166  changed=27  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2025-05-13 23:22:58.878674 | orchestrator | testbed-node-1 : ok=155  changed=7  unreachable=0 failed=0 skipped=40  rescued=0 ignored=0 2025-05-13 23:22:58.878885 | orchestrator | testbed-node-2 : ok=155  changed=7  unreachable=0 failed=0 skipped=40  rescued=0 ignored=0 2025-05-13 23:22:58.879878 | orchestrator | testbed-node-3 : ok=155  changed=8  unreachable=0 failed=0 skipped=40  rescued=0 ignored=0 2025-05-13 23:22:58.880697 | orchestrator | testbed-node-4 : ok=155  changed=8  unreachable=0 failed=0 skipped=40  rescued=0 ignored=0 2025-05-13 23:22:58.881153 | orchestrator | testbed-node-5 : ok=155  changed=8  unreachable=0 failed=0 skipped=40  rescued=0 ignored=0 2025-05-13 23:22:58.881679 | orchestrator | 2025-05-13 23:22:58.882156 | orchestrator | 2025-05-13 23:22:58.882830 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:22:58.883395 | orchestrator | Tuesday 13 May 2025 23:22:58 +0000 (0:00:00.421) 0:04:56.043 *********** 2025-05-13 23:22:58.883896 | orchestrator | =============================================================================== 2025-05-13 23:22:58.884616 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 7.97s 2025-05-13 23:22:58.885010 | orchestrator | osism.commons.hosts : Copy /etc/hosts file ------------------------------ 7.11s 2025-05-13 23:22:58.885481 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 6.95s 2025-05-13 23:22:58.885977 | orchestrator | osism.commons.cleanup : Populate service 
facts -------------------------- 5.95s 2025-05-13 23:22:58.886375 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.95s 2025-05-13 23:22:58.886868 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.82s 2025-05-13 23:22:58.887256 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 4.95s 2025-05-13 23:22:58.887840 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.46s 2025-05-13 23:22:58.888061 | orchestrator | osism.commons.repository : Update package cache ------------------------- 3.99s 2025-05-13 23:22:58.888489 | orchestrator | osism.services.docker : Install python3 docker package from Debian Sid --- 3.72s 2025-05-13 23:22:58.888869 | orchestrator | osism.commons.systohc : Install util-linux-extra package ---------------- 3.67s 2025-05-13 23:22:58.889261 | orchestrator | osism.services.docker : Update package cache ---------------------------- 3.29s 2025-05-13 23:22:58.889627 | orchestrator | osism.services.docker : Gather package facts ---------------------------- 3.11s 2025-05-13 23:22:58.890061 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required --- 2.98s 2025-05-13 23:22:58.890407 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------- 2.79s 2025-05-13 23:22:58.890813 | orchestrator | osism.services.docker : Copy docker fact files -------------------------- 2.71s 2025-05-13 23:22:58.891212 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 2.50s 2025-05-13 23:22:58.891664 | orchestrator | osism.services.rsyslog : Install rsyslog package ------------------------ 2.37s 2025-05-13 23:22:58.891984 | orchestrator | osism.commons.packages : Download upgrade packages ---------------------- 2.29s 2025-05-13 23:22:58.892172 | orchestrator | osism.commons.timezone : Install tzdata package ------------------------- 2.28s 2025-05-13 23:22:59.698377 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-13 23:22:59.698451 | orchestrator | + osism apply network 2025-05-13 23:23:01.727194 | orchestrator | 2025-05-13 23:23:01 | INFO  | Task 75bfe100-fda0-4abe-8f5f-7c7199913760 (network) was prepared for execution. 2025-05-13 23:23:01.727297 | orchestrator | 2025-05-13 23:23:01 | INFO  | It takes a moment until task 75bfe100-fda0-4abe-8f5f-7c7199913760 (network) has been started and output is visible here. 
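The "osism apply network" play that follows renders /etc/netplan/01-osism.yaml on every host and, in a later cleanup task, prunes the cloud-init default /etc/netplan/50-cloud-init.yaml. A minimal sketch of the kind of file the role manages and how netplan activates it; the interface name and address below are hypothetical placeholders, not values from this run:

    # Sketch only: the real 01-osism.yaml is rendered by the
    # osism.commons.network role from the testbed inventory.
    cat > /etc/netplan/01-osism.yaml <<'EOF'
    network:
      version: 2
      ethernets:
        ens3:                      # hypothetical interface name
          dhcp4: false
          addresses:
            - 192.168.16.10/20     # placeholder management address
    EOF
    chmod 600 /etc/netplan/01-osism.yaml  # netplan warns about world-readable files
    netplan generate                      # render the systemd-networkd backend config
    netplan apply                         # activate without a reboot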
2025-05-13 23:23:05.852385 | orchestrator | 2025-05-13 23:23:05.853036 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-13 23:23:05.855216 | orchestrator | 2025-05-13 23:23:05.855382 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-13 23:23:05.856608 | orchestrator | Tuesday 13 May 2025 23:23:05 +0000 (0:00:00.280) 0:00:00.280 *********** 2025-05-13 23:23:05.999460 | orchestrator | ok: [testbed-manager] 2025-05-13 23:23:06.076484 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:23:06.153472 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:23:06.229426 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:23:06.413956 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:23:06.536269 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:23:06.538189 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:23:06.541298 | orchestrator | 2025-05-13 23:23:06.541794 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-13 23:23:06.542303 | orchestrator | Tuesday 13 May 2025 23:23:06 +0000 (0:00:00.682) 0:00:00.963 *********** 2025-05-13 23:23:07.754559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:23:07.756487 | orchestrator | 2025-05-13 23:23:07.756529 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-13 23:23:07.757197 | orchestrator | Tuesday 13 May 2025 23:23:07 +0000 (0:00:01.217) 0:00:02.180 *********** 2025-05-13 23:23:09.678704 | orchestrator | ok: [testbed-manager] 2025-05-13 23:23:09.679189 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:23:09.682598 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:23:09.682634 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:23:09.683345 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:23:09.684500 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:23:09.686103 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:23:09.686468 | orchestrator | 2025-05-13 23:23:09.687378 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-13 23:23:09.687993 | orchestrator | Tuesday 13 May 2025 23:23:09 +0000 (0:00:01.927) 0:00:04.108 *********** 2025-05-13 23:23:11.425131 | orchestrator | ok: [testbed-manager] 2025-05-13 23:23:11.426299 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:23:11.426332 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:23:11.427570 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:23:11.428686 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:23:11.429135 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:23:11.430288 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:23:11.431300 | orchestrator | 2025-05-13 23:23:11.431833 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-13 23:23:11.433100 | orchestrator | Tuesday 13 May 2025 23:23:11 +0000 (0:00:01.741) 0:00:05.849 *********** 2025-05-13 23:23:11.927092 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-13 23:23:12.565267 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-13 23:23:12.566569 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-13 23:23:12.568120 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-13 23:23:12.569269 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-13 23:23:12.570148 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-13 23:23:12.570840 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-13 23:23:12.572797 | orchestrator | 2025-05-13 23:23:12.572838 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-13 23:23:12.574281 | orchestrator | Tuesday 13 May 2025 23:23:12 +0000 (0:00:01.143) 0:00:06.992 *********** 2025-05-13 23:23:15.860218 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:23:15.860851 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-13 23:23:15.862273 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-13 23:23:15.862328 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 23:23:15.862976 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-13 23:23:15.864936 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-13 23:23:15.865612 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 23:23:15.867418 | orchestrator | 2025-05-13 23:23:15.867862 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-13 23:23:15.869134 | orchestrator | Tuesday 13 May 2025 23:23:15 +0000 (0:00:03.296) 0:00:10.289 *********** 2025-05-13 23:23:17.635351 | orchestrator | changed: [testbed-manager] 2025-05-13 23:23:17.635494 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:23:17.636736 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:23:17.639788 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:23:17.639808 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:23:17.639816 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:23:17.640522 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:23:17.642163 | orchestrator | 2025-05-13 23:23:17.642828 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-13 23:23:17.643470 | orchestrator | Tuesday 13 May 2025 23:23:17 +0000 (0:00:01.771) 0:00:12.061 *********** 2025-05-13 23:23:19.369774 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 23:23:19.370351 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:23:19.371364 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-13 23:23:19.373361 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-13 23:23:19.375089 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-13 23:23:19.375426 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 23:23:19.376234 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-13 23:23:19.377429 | orchestrator | 2025-05-13 23:23:19.377924 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-13 23:23:19.379745 | orchestrator | Tuesday 13 May 2025 23:23:19 +0000 (0:00:01.738) 0:00:13.799 *********** 2025-05-13 23:23:19.809972 | orchestrator | ok: [testbed-manager] 2025-05-13 23:23:19.894605 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:23:20.011027 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:23:20.424936 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:23:20.425014 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:23:20.425021 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:23:20.425026 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:23:20.425032 | orchestrator | 2025-05-13 
23:23:20.425038 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-13 23:23:20.425082 | orchestrator | Tuesday 13 May 2025 23:23:20 +0000 (0:00:01.048) 0:00:14.848 *********** 2025-05-13 23:23:20.566241 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:23:20.637746 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:23:20.709183 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:23:20.785418 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:23:20.871718 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:23:21.011221 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:23:21.015024 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:23:21.015085 | orchestrator | 2025-05-13 23:23:21.015096 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-13 23:23:21.015105 | orchestrator | Tuesday 13 May 2025 23:23:21 +0000 (0:00:00.593) 0:00:15.441 *********** 2025-05-13 23:23:23.060878 | orchestrator | ok: [testbed-manager] 2025-05-13 23:23:23.061019 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:23:23.061092 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:23:23.061550 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:23:23.062237 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:23:23.062814 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:23:23.063442 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:23:23.064187 | orchestrator | 2025-05-13 23:23:23.064703 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-13 23:23:23.065271 | orchestrator | Tuesday 13 May 2025 23:23:23 +0000 (0:00:02.044) 0:00:17.485 *********** 2025-05-13 23:23:23.292514 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:23:23.375735 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:23:23.502568 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:23:23.836134 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:23:23.836717 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-13 23:23:23.963295 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:23:23.964442 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:23:23.965612 | orchestrator | 2025-05-13 23:23:23.966688 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-13 23:23:23.968123 | orchestrator | Tuesday 13 May 2025 23:23:23 +0000 (0:00:00.904) 0:00:18.390 *********** 2025-05-13 23:23:25.453395 | orchestrator | ok: [testbed-manager] 2025-05-13 23:23:25.453793 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:23:25.454737 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:23:25.455919 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:23:25.455944 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:23:25.455955 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:23:25.456249 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:23:25.457006 | orchestrator | 2025-05-13 23:23:25.457029 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-13 23:23:25.458913 | orchestrator | Tuesday 13 May 2025 23:23:25 +0000 (0:00:01.491) 0:00:19.881 *********** 2025-05-13 23:23:26.785936 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:23:26.786103 | orchestrator | 2025-05-13 23:23:26.786123 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-13 23:23:26.786477 | orchestrator | Tuesday 13 May 2025 23:23:26 +0000 (0:00:01.329) 0:00:21.210 *********** 2025-05-13 23:23:27.318589 | orchestrator | ok: [testbed-manager] 2025-05-13 23:23:27.407260 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:23:28.060049 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:23:28.064844 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:23:28.064884 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:23:28.064896 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:23:28.064908 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:23:28.064919 | orchestrator | 2025-05-13 23:23:28.064932 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-13 23:23:28.065272 | orchestrator | Tuesday 13 May 2025 23:23:28 +0000 (0:00:01.271) 0:00:22.482 *********** 2025-05-13 23:23:28.237161 | orchestrator | ok: [testbed-manager] 2025-05-13 23:23:28.343089 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:23:28.424620 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:23:28.524860 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:23:28.641354 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:23:28.964174 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:23:28.964919 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:23:28.965851 | orchestrator | 2025-05-13 23:23:28.967266 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-13 23:23:28.970381 | orchestrator | Tuesday 13 May 2025 23:23:28 +0000 (0:00:00.908) 0:00:23.391 *********** 2025-05-13 23:23:29.377310 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 23:23:29.377417 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 23:23:29.471453 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 23:23:29.471833 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 23:23:29.563396 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 23:23:29.563501 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 23:23:30.025834 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 23:23:30.027047 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 23:23:30.027808 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 23:23:30.028954 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 23:23:30.029775 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 23:23:30.030444 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 23:23:30.031601 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 23:23:30.032235 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 
23:23:30.033071 | orchestrator | 2025-05-13 23:23:30.033680 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-13 23:23:30.035569 | orchestrator | Tuesday 13 May 2025 23:23:30 +0000 (0:00:01.063) 0:00:24.454 *********** 2025-05-13 23:23:30.186116 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:23:30.264926 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:23:30.343048 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:23:30.621500 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:23:30.712774 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:23:30.847596 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:23:30.851315 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:23:30.851351 | orchestrator | 2025-05-13 23:23:30.851366 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-05-13 23:23:30.851378 | orchestrator | Tuesday 13 May 2025 23:23:30 +0000 (0:00:00.818) 0:00:25.272 *********** 2025-05-13 23:23:34.595868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-4, testbed-node-2, testbed-node-3, testbed-node-5 2025-05-13 23:23:34.596059 | orchestrator | 2025-05-13 23:23:34.599525 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-05-13 23:23:34.599559 | orchestrator | Tuesday 13 May 2025 23:23:34 +0000 (0:00:03.748) 0:00:29.021 *********** 2025-05-13 23:23:39.851627 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:39.851877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:39.852068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:39.856729 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:39.857123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:39.858156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:39.858692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:39.859008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:39.859514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:39.860075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:39.860668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:39.860902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:39.861550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:39.862430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:39.863217 | orchestrator | 2025-05-13 23:23:39.863728 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-05-13 23:23:39.864539 | orchestrator | Tuesday 13 May 2025 23:23:39 +0000 (0:00:05.258) 0:00:34.279 *********** 2025-05-13 23:23:44.558112 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:44.558223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:44.558234 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:44.559316 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:44.560610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:44.564118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:44.565717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:44.566763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:44.567874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-13 23:23:44.568768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:44.569740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:44.570961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:44.571382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:44.572182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-13 23:23:44.572814 | orchestrator | 2025-05-13 23:23:44.573699 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-05-13 23:23:44.575086 | orchestrator | Tuesday 13 May 2025 23:23:44 +0000 (0:00:04.706) 0:00:38.986 *********** 
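The two tasks above write a systemd-networkd .netdev/.network pair per VXLAN: vxlan0 (VNI 42) and vxlan1 (VNI 23), MTU 1350, each with a unicast peer list. A sketch of one plausible rendering of the vxlan0 pair on testbed-manager, built from the item values in the log; the role's actual templates may differ, and the all-zero [BridgeFDB] entries are one common way to express the dests list as head-end replication:

    # Sketch derived from the logged values (VNI 42, MTU 1350,
    # local 192.168.16.5); not necessarily the role's exact output.
    cat > /etc/systemd/network/30-vxlan0.netdev <<'EOF'
    [NetDev]
    Name=vxlan0
    Kind=vxlan
    MTUBytes=1350

    [VXLAN]
    VNI=42
    Local=192.168.16.5
    EOF

    cat > /etc/systemd/network/30-vxlan0.network <<'EOF'
    [Match]
    Name=vxlan0

    [Network]
    Address=192.168.112.5/20

    # One all-zero FDB entry per peer floods unknown/broadcast frames
    # to that VTEP (only the first two peers shown here).
    [BridgeFDB]
    MACAddress=00:00:00:00:00:00
    Destination=192.168.16.10

    [BridgeFDB]
    MACAddress=00:00:00:00:00:00
    Destination=192.168.16.11
    EOF

    # The uplink's .network must also attach the device with
    # "VXLAN=vxlan0" under its [Network] section.
    networkctl reload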
2025-05-13 23:23:45.868730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:23:45.869061 | orchestrator | 2025-05-13 23:23:45.869808 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-13 23:23:45.870832 | orchestrator | Tuesday 13 May 2025 23:23:45 +0000 (0:00:01.307) 0:00:40.294 *********** 2025-05-13 23:23:46.495509 | orchestrator | ok: [testbed-manager] 2025-05-13 23:23:46.585822 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:23:46.677112 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:23:47.176584 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:23:47.178357 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:23:47.179635 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:23:47.180961 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:23:47.181592 | orchestrator | 2025-05-13 23:23:47.182472 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-13 23:23:47.182920 | orchestrator | Tuesday 13 May 2025 23:23:47 +0000 (0:00:01.305) 0:00:41.599 *********** 2025-05-13 23:23:47.297301 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 23:23:47.297458 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 23:23:47.297794 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 23:23:47.299950 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 23:23:47.396231 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:23:47.397946 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 23:23:47.399070 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 23:23:47.400780 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 23:23:47.510113 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 23:23:47.511539 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 23:23:47.512588 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 23:23:47.513308 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 23:23:47.516942 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 23:23:47.618543 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:23:47.618781 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 23:23:47.619444 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 23:23:47.620329 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 23:23:47.622286 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 23:23:47.712797 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:23:47.713139 | orchestrator | skipping: [testbed-node-3] => 
(item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 23:23:47.713639 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 23:23:47.714157 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 23:23:47.714580 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 23:23:47.805391 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:23:47.805863 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 23:23:47.806474 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 23:23:47.807191 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 23:23:47.808339 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 23:23:49.284075 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:23:49.284178 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:23:49.287379 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 23:23:49.288394 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 23:23:49.289372 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 23:23:49.290502 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 23:23:49.291135 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:23:49.292419 | orchestrator | 2025-05-13 23:23:49.293853 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-05-13 23:23:49.294679 | orchestrator | Tuesday 13 May 2025 23:23:49 +0000 (0:00:02.105) 0:00:43.704 *********** 2025-05-13 23:23:49.445977 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:23:49.536737 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:23:49.621106 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:23:49.706538 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:23:49.792094 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:23:49.908720 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:23:49.909774 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:23:49.910882 | orchestrator | 2025-05-13 23:23:49.912058 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-13 23:23:49.912894 | orchestrator | Tuesday 13 May 2025 23:23:49 +0000 (0:00:00.633) 0:00:44.338 *********** 2025-05-13 23:23:50.074307 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:23:50.159942 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:23:50.434933 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:23:50.529024 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:23:50.617301 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:23:50.649093 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:23:50.649884 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:23:50.650990 | orchestrator | 2025-05-13 23:23:50.651570 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:23:50.652139 | orchestrator | 2025-05-13 23:23:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-13 23:23:50.652285 | orchestrator | 2025-05-13 23:23:50 | INFO  | Please wait and do not abort execution. 2025-05-13 23:23:50.652819 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:23:50.653408 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 23:23:50.654252 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 23:23:50.654601 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 23:23:50.655248 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 23:23:50.655765 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 23:23:50.656342 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 23:23:50.656838 | orchestrator | 2025-05-13 23:23:50.657432 | orchestrator | 2025-05-13 23:23:50.658248 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:23:50.658756 | orchestrator | Tuesday 13 May 2025 23:23:50 +0000 (0:00:00.741) 0:00:45.079 *********** 2025-05-13 23:23:50.658999 | orchestrator | =============================================================================== 2025-05-13 23:23:50.659331 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.26s 2025-05-13 23:23:50.659806 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.71s 2025-05-13 23:23:50.660081 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.75s 2025-05-13 23:23:50.660458 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.30s 2025-05-13 23:23:50.661074 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.11s 2025-05-13 23:23:50.661529 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.04s 2025-05-13 23:23:50.661913 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.93s 2025-05-13 23:23:50.662391 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.77s 2025-05-13 23:23:50.662692 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.74s 2025-05-13 23:23:50.664112 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.74s 2025-05-13 23:23:50.664664 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.49s 2025-05-13 23:23:50.665376 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.33s 2025-05-13 23:23:50.666154 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.31s 2025-05-13 23:23:50.666499 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.31s 2025-05-13 23:23:50.667221 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.27s 2025-05-13 23:23:50.667879 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s 2025-05-13 23:23:50.668495 | orchestrator | osism.commons.network : 
Create required directories --------------------- 1.14s 2025-05-13 23:23:50.669304 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.06s 2025-05-13 23:23:50.670121 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.05s 2025-05-13 23:23:50.670552 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.91s 2025-05-13 23:23:51.350321 | orchestrator | + osism apply wireguard 2025-05-13 23:23:53.188955 | orchestrator | 2025-05-13 23:23:53 | INFO  | Task 50342180-6e8d-4f09-8e95-d68d2fca54bc (wireguard) was prepared for execution. 2025-05-13 23:23:53.189057 | orchestrator | 2025-05-13 23:23:53 | INFO  | It takes a moment until task 50342180-6e8d-4f09-8e95-d68d2fca54bc (wireguard) has been started and output is visible here. 2025-05-13 23:23:57.439333 | orchestrator | 2025-05-13 23:23:57.439864 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-13 23:23:57.444923 | orchestrator | 2025-05-13 23:23:57.446317 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-13 23:23:57.447850 | orchestrator | Tuesday 13 May 2025 23:23:57 +0000 (0:00:00.238) 0:00:00.238 *********** 2025-05-13 23:23:58.993725 | orchestrator | ok: [testbed-manager] 2025-05-13 23:23:58.994304 | orchestrator | 2025-05-13 23:23:58.995479 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-13 23:23:58.997145 | orchestrator | Tuesday 13 May 2025 23:23:58 +0000 (0:00:01.556) 0:00:01.795 *********** 2025-05-13 23:24:05.254150 | orchestrator | changed: [testbed-manager] 2025-05-13 23:24:05.254520 | orchestrator | 2025-05-13 23:24:05.255274 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-13 23:24:05.255905 | orchestrator | Tuesday 13 May 2025 23:24:05 +0000 (0:00:06.260) 0:00:08.055 *********** 2025-05-13 23:24:05.812235 | orchestrator | changed: [testbed-manager] 2025-05-13 23:24:05.812840 | orchestrator | 2025-05-13 23:24:05.814089 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-13 23:24:05.814892 | orchestrator | Tuesday 13 May 2025 23:24:05 +0000 (0:00:00.558) 0:00:08.614 *********** 2025-05-13 23:24:06.277201 | orchestrator | changed: [testbed-manager] 2025-05-13 23:24:06.277580 | orchestrator | 2025-05-13 23:24:06.278454 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-13 23:24:06.279708 | orchestrator | Tuesday 13 May 2025 23:24:06 +0000 (0:00:00.465) 0:00:09.079 *********** 2025-05-13 23:24:06.955996 | orchestrator | ok: [testbed-manager] 2025-05-13 23:24:06.956761 | orchestrator | 2025-05-13 23:24:06.958568 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-13 23:24:06.958886 | orchestrator | Tuesday 13 May 2025 23:24:06 +0000 (0:00:00.679) 0:00:09.758 *********** 2025-05-13 23:24:07.381421 | orchestrator | ok: [testbed-manager] 2025-05-13 23:24:07.381721 | orchestrator | 2025-05-13 23:24:07.382288 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-13 23:24:07.383087 | orchestrator | Tuesday 13 May 2025 23:24:07 +0000 (0:00:00.423) 0:00:10.182 *********** 2025-05-13 23:24:07.783117 | orchestrator | ok: [testbed-manager] 2025-05-13 23:24:07.783241 | orchestrator | 2025-05-13 
23:24:07.783765 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-13 23:24:07.784866 | orchestrator | Tuesday 13 May 2025 23:24:07 +0000 (0:00:00.403) 0:00:10.586 *********** 2025-05-13 23:24:09.048756 | orchestrator | changed: [testbed-manager] 2025-05-13 23:24:09.048929 | orchestrator | 2025-05-13 23:24:09.048950 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-05-13 23:24:09.049901 | orchestrator | Tuesday 13 May 2025 23:24:09 +0000 (0:00:01.264) 0:00:11.851 *********** 2025-05-13 23:24:10.023278 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 23:24:10.024781 | orchestrator | changed: [testbed-manager] 2025-05-13 23:24:10.024816 | orchestrator | 2025-05-13 23:24:10.025215 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-13 23:24:10.026426 | orchestrator | Tuesday 13 May 2025 23:24:10 +0000 (0:00:00.971) 0:00:12.822 *********** 2025-05-13 23:24:11.847743 | orchestrator | changed: [testbed-manager] 2025-05-13 23:24:11.848283 | orchestrator | 2025-05-13 23:24:11.849486 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-13 23:24:11.852136 | orchestrator | Tuesday 13 May 2025 23:24:11 +0000 (0:00:01.826) 0:00:14.649 *********** 2025-05-13 23:24:12.812080 | orchestrator | changed: [testbed-manager] 2025-05-13 23:24:12.812269 | orchestrator | 2025-05-13 23:24:12.813470 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:24:12.813550 | orchestrator | 2025-05-13 23:24:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:24:12.813566 | orchestrator | 2025-05-13 23:24:12 | INFO  | Please wait and do not abort execution. 
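The wireguard play above generates the server key pair and a preshared key (typically produced with wg genkey / wg genpsk), renders /etc/wireguard/wg0.conf plus per-client configuration files, and enables wg-quick@wg0. A minimal sketch of the shape such a server config takes; every value below is a placeholder, not what this run generated:

    # Placeholder values throughout; not the keys created above.
    cat > /etc/wireguard/wg0.conf <<'EOF'
    [Interface]
    PrivateKey = <server-private-key>
    Address = 192.168.48.1/24
    ListenPort = 51820

    [Peer]
    PublicKey = <client-public-key>
    PresharedKey = <preshared-key>
    AllowedIPs = 192.168.48.2/32
    EOF
    chmod 600 /etc/wireguard/wg0.conf
    systemctl enable --now wg-quick@wg0  # the "Manage wg-quick@wg0.service" task
    wg show wg0                          # handshake and peer state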
2025-05-13 23:24:12.814310 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:24:12.814859 | orchestrator | 2025-05-13 23:24:12.815229 | orchestrator | 2025-05-13 23:24:12.816350 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:24:12.816795 | orchestrator | Tuesday 13 May 2025 23:24:12 +0000 (0:00:00.964) 0:00:15.614 *********** 2025-05-13 23:24:12.817262 | orchestrator | =============================================================================== 2025-05-13 23:24:12.818245 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.26s 2025-05-13 23:24:12.818805 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.83s 2025-05-13 23:24:12.819525 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.56s 2025-05-13 23:24:12.820078 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.26s 2025-05-13 23:24:12.820720 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2025-05-13 23:24:12.821636 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s 2025-05-13 23:24:12.822048 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.68s 2025-05-13 23:24:12.822345 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-05-13 23:24:12.822968 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s 2025-05-13 23:24:12.823352 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2025-05-13 23:24:12.823887 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-05-13 23:24:13.505874 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-13 23:24:13.544936 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-13 23:24:13.545023 | orchestrator | Dload Upload Total Spent Left Speed 2025-05-13 23:24:13.622885 | orchestrator | 100 14 100 14 0 0 179 0 --:--:-- --:--:-- --:--:-- 179 2025-05-13 23:24:13.638241 | orchestrator | + osism apply --environment custom workarounds 2025-05-13 23:24:15.389852 | orchestrator | 2025-05-13 23:24:15 | INFO  | Trying to run play workarounds in environment custom 2025-05-13 23:24:15.451992 | orchestrator | 2025-05-13 23:24:15 | INFO  | Task b9734b76-9531-4d59-aae8-304daaa55840 (workarounds) was prepared for execution. 2025-05-13 23:24:15.452093 | orchestrator | 2025-05-13 23:24:15 | INFO  | It takes a moment until task b9734b76-9531-4d59-aae8-304daaa55840 (workarounds) has been started and output is visible here. 
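prepare-wireguard-configuration.sh completes the client side of the tunnel; the curl call above fetched a 14-byte payload, which looks like an address lookup, before the script assembled the client profile from the files the wireguard play generated. A hedged sketch of how such a client configuration is typically consumed; the path and ping target are assumptions, not taken from the log:

    # Assumed client config path; bring the tunnel up and verify it.
    sudo wg-quick up ./wg0-client.conf
    sudo wg show                         # expect a recent handshake
    ping -c 3 <manager-tunnel-address>   # placeholder for the server's tunnel IP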
2025-05-13 23:24:19.403447 | orchestrator | 2025-05-13 23:24:19.403729 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:24:19.405060 | orchestrator | 2025-05-13 23:24:19.407012 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-13 23:24:19.407988 | orchestrator | Tuesday 13 May 2025 23:24:19 +0000 (0:00:00.143) 0:00:00.143 *********** 2025-05-13 23:24:19.560942 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-13 23:24:19.632641 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-13 23:24:19.709570 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-13 23:24:19.784258 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-13 23:24:19.934370 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-13 23:24:20.086749 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-13 23:24:20.088209 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-13 23:24:20.088243 | orchestrator | 2025-05-13 23:24:20.088257 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-13 23:24:20.088467 | orchestrator | 2025-05-13 23:24:20.088981 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-13 23:24:20.089336 | orchestrator | Tuesday 13 May 2025 23:24:20 +0000 (0:00:00.685) 0:00:00.829 *********** 2025-05-13 23:24:22.761300 | orchestrator | ok: [testbed-manager] 2025-05-13 23:24:22.762378 | orchestrator | 2025-05-13 23:24:22.763589 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-13 23:24:22.764123 | orchestrator | 2025-05-13 23:24:22.764526 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-13 23:24:22.765356 | orchestrator | Tuesday 13 May 2025 23:24:22 +0000 (0:00:02.649) 0:00:03.479 *********** 2025-05-13 23:24:24.770164 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:24:24.771374 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:24:24.772225 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:24:24.773387 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:24:24.774736 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:24:24.775540 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:24:24.776761 | orchestrator | 2025-05-13 23:24:24.778549 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-13 23:24:24.781449 | orchestrator | 2025-05-13 23:24:24.782515 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-13 23:24:24.783004 | orchestrator | Tuesday 13 May 2025 23:24:24 +0000 (0:00:02.024) 0:00:05.504 *********** 2025-05-13 23:24:26.358403 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 23:24:26.358971 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 23:24:26.360003 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 23:24:26.360633 | orchestrator | changed: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 23:24:26.361806 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 23:24:26.362203 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 23:24:26.362769 | orchestrator | 2025-05-13 23:24:26.363560 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-13 23:24:26.363918 | orchestrator | Tuesday 13 May 2025 23:24:26 +0000 (0:00:01.595) 0:00:07.099 *********** 2025-05-13 23:24:30.070788 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:24:30.071747 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:24:30.074577 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:24:30.076273 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:24:30.079411 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:24:30.079897 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:24:30.080837 | orchestrator | 2025-05-13 23:24:30.083241 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-13 23:24:30.084952 | orchestrator | Tuesday 13 May 2025 23:24:30 +0000 (0:00:03.712) 0:00:10.812 *********** 2025-05-13 23:24:30.234752 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:24:30.314698 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:24:30.389474 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:24:30.634008 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:24:30.776185 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:24:30.776613 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:24:30.778452 | orchestrator | 2025-05-13 23:24:30.780414 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-13 23:24:30.780716 | orchestrator | 2025-05-13 23:24:30.781923 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-13 23:24:30.782826 | orchestrator | Tuesday 13 May 2025 23:24:30 +0000 (0:00:00.706) 0:00:11.518 *********** 2025-05-13 23:24:32.449922 | orchestrator | changed: [testbed-manager] 2025-05-13 23:24:32.453259 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:24:32.453370 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:24:32.455138 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:24:32.457170 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:24:32.458157 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:24:32.459801 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:24:32.461072 | orchestrator | 2025-05-13 23:24:32.462225 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-13 23:24:32.463525 | orchestrator | Tuesday 13 May 2025 23:24:32 +0000 (0:00:01.671) 0:00:13.190 *********** 2025-05-13 23:24:34.123478 | orchestrator | changed: [testbed-manager] 2025-05-13 23:24:34.126744 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:24:34.128751 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:24:34.130201 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:24:34.131467 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:24:34.132243 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:24:34.134910 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:24:34.135434 | orchestrator | 
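The two copy tasks above install the workarounds.sh script and its systemd unit on every host; the reload and enable tasks that follow wire the unit into systemd. A minimal oneshot unit of the shape such a boot-time wrapper usually takes; the unit name matches the log, but the contents and script path are assumptions:

    # Sketch of a oneshot unit; the ExecStart path is an assumption.
    cat > /etc/systemd/system/workarounds.service <<'EOF'
    [Unit]
    Description=Apply local workarounds at boot
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/workarounds.sh
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload                # the "Reload systemd daemon" task
    systemctl enable workarounds.service   # the "Enable workarounds.service (Debian)" task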
2025-05-13 23:24:34.136490 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-13 23:24:34.137209 | orchestrator | Tuesday 13 May 2025 23:24:34 +0000 (0:00:01.669) 0:00:14.860 *********** 2025-05-13 23:24:35.605556 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:24:35.605716 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:24:35.606802 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:24:35.607710 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:24:35.608182 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:24:35.609001 | orchestrator | ok: [testbed-manager] 2025-05-13 23:24:35.609866 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:24:35.617007 | orchestrator | 2025-05-13 23:24:35.617230 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-13 23:24:35.618085 | orchestrator | Tuesday 13 May 2025 23:24:35 +0000 (0:00:01.484) 0:00:16.344 *********** 2025-05-13 23:24:37.481070 | orchestrator | changed: [testbed-manager] 2025-05-13 23:24:37.481963 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:24:37.483326 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:24:37.484606 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:24:37.486234 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:24:37.486976 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:24:37.487641 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:24:37.488508 | orchestrator | 2025-05-13 23:24:37.489506 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-13 23:24:37.490206 | orchestrator | Tuesday 13 May 2025 23:24:37 +0000 (0:00:01.874) 0:00:18.219 *********** 2025-05-13 23:24:37.666827 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:24:37.761180 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:24:37.838486 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:24:37.918254 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:24:37.995628 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:24:38.344328 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:24:38.345775 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:24:38.345927 | orchestrator | 2025-05-13 23:24:38.347785 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-13 23:24:38.348678 | orchestrator | 2025-05-13 23:24:38.350158 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-13 23:24:38.351204 | orchestrator | Tuesday 13 May 2025 23:24:38 +0000 (0:00:00.865) 0:00:19.085 *********** 2025-05-13 23:24:40.817386 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:24:40.817498 | orchestrator | ok: [testbed-manager] 2025-05-13 23:24:40.817994 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:24:40.818800 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:24:40.819908 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:24:40.820323 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:24:40.820750 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:24:40.822807 | orchestrator | 2025-05-13 23:24:40.824051 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:24:40.824476 | orchestrator | 2025-05-13 23:24:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-13 23:24:40.824709 | orchestrator | 2025-05-13 23:24:40 | INFO  | Please wait and do not abort execution. 2025-05-13 23:24:40.826089 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:24:40.826741 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:40.827813 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:40.829349 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:40.830637 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:40.831781 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:40.832304 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:40.833205 | orchestrator | 2025-05-13 23:24:40.833916 | orchestrator | 2025-05-13 23:24:40.834992 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:24:40.836383 | orchestrator | Tuesday 13 May 2025 23:24:40 +0000 (0:00:02.473) 0:00:21.558 *********** 2025-05-13 23:24:40.837127 | orchestrator | =============================================================================== 2025-05-13 23:24:40.838180 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.71s 2025-05-13 23:24:40.838925 | orchestrator | Apply netplan configuration --------------------------------------------- 2.65s 2025-05-13 23:24:40.839936 | orchestrator | Install python3-docker -------------------------------------------------- 2.47s 2025-05-13 23:24:40.840881 | orchestrator | Apply netplan configuration --------------------------------------------- 2.02s 2025-05-13 23:24:40.841553 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.87s 2025-05-13 23:24:40.842242 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.67s 2025-05-13 23:24:40.842827 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.67s 2025-05-13 23:24:40.843337 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.60s 2025-05-13 23:24:40.844099 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.48s 2025-05-13 23:24:40.844787 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.87s 2025-05-13 23:24:40.845330 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s 2025-05-13 23:24:40.845979 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.69s 2025-05-13 23:24:41.450266 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-13 23:24:43.213940 | orchestrator | 2025-05-13 23:24:43 | INFO  | Task 47b9cc96-7211-4064-a262-9f23d6931f97 (reboot) was prepared for execution. 2025-05-13 23:24:43.214096 | orchestrator | 2025-05-13 23:24:43 | INFO  | It takes a moment until task 47b9cc96-7211-4064-a262-9f23d6931f97 (reboot) has been started and output is visible here. 
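For reference: the custom-CA tasks recapped above reduce to the following shell steps on a Debian-family host. The destination filename under /usr/local/share/ca-certificates/ is an assumption (the play only prints the source path), and the skipped "Run update-ca-trust" task is the RedHat-family counterpart, which is why it does not fire on these Ubuntu nodes.

    # Hypothetical manual equivalent of "Copy custom CA certificates" and
    # "Run update-ca-certificates" (Debian/Ubuntu only; destination name assumed)
    sudo cp /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
        /usr/local/share/ca-certificates/testbed.crt
    sudo update-ca-certificates   # rebuilds /etc/ssl/certs; a RedHat host would run update-ca-trust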
2025-05-13 23:24:47.281744 | orchestrator | 2025-05-13 23:24:47.285471 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-13 23:24:47.286327 | orchestrator | 2025-05-13 23:24:47.287966 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-13 23:24:47.288573 | orchestrator | Tuesday 13 May 2025 23:24:47 +0000 (0:00:00.212) 0:00:00.212 *********** 2025-05-13 23:24:47.396549 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:24:47.396757 | orchestrator | 2025-05-13 23:24:47.400967 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-13 23:24:47.402118 | orchestrator | Tuesday 13 May 2025 23:24:47 +0000 (0:00:00.117) 0:00:00.329 *********** 2025-05-13 23:24:48.341392 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:24:48.341814 | orchestrator | 2025-05-13 23:24:48.342520 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-13 23:24:48.343254 | orchestrator | Tuesday 13 May 2025 23:24:48 +0000 (0:00:00.944) 0:00:01.274 *********** 2025-05-13 23:24:48.461546 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:24:48.463328 | orchestrator | 2025-05-13 23:24:48.464628 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-13 23:24:48.465974 | orchestrator | 2025-05-13 23:24:48.466169 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-13 23:24:48.467239 | orchestrator | Tuesday 13 May 2025 23:24:48 +0000 (0:00:00.121) 0:00:01.395 *********** 2025-05-13 23:24:48.558894 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:24:48.559114 | orchestrator | 2025-05-13 23:24:48.559804 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-13 23:24:48.561821 | orchestrator | Tuesday 13 May 2025 23:24:48 +0000 (0:00:00.097) 0:00:01.492 *********** 2025-05-13 23:24:49.258280 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:24:49.259178 | orchestrator | 2025-05-13 23:24:49.260046 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-13 23:24:49.260950 | orchestrator | Tuesday 13 May 2025 23:24:49 +0000 (0:00:00.698) 0:00:02.191 *********** 2025-05-13 23:24:49.382887 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:24:49.384252 | orchestrator | 2025-05-13 23:24:49.384981 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-13 23:24:49.386781 | orchestrator | 2025-05-13 23:24:49.386812 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-13 23:24:49.387767 | orchestrator | Tuesday 13 May 2025 23:24:49 +0000 (0:00:00.120) 0:00:02.312 *********** 2025-05-13 23:24:49.605549 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:24:49.606872 | orchestrator | 2025-05-13 23:24:49.607644 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-13 23:24:49.610357 | orchestrator | Tuesday 13 May 2025 23:24:49 +0000 (0:00:00.226) 0:00:02.539 *********** 2025-05-13 23:24:50.291848 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:24:50.292637 | orchestrator | 2025-05-13 23:24:50.294006 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-13 
23:24:50.294531 | orchestrator | Tuesday 13 May 2025 23:24:50 +0000 (0:00:00.685) 0:00:03.225 *********** 2025-05-13 23:24:50.421054 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:24:50.421534 | orchestrator | 2025-05-13 23:24:50.422096 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-13 23:24:50.422746 | orchestrator | 2025-05-13 23:24:50.423230 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-13 23:24:50.423825 | orchestrator | Tuesday 13 May 2025 23:24:50 +0000 (0:00:00.123) 0:00:03.349 *********** 2025-05-13 23:24:50.525495 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:24:50.525633 | orchestrator | 2025-05-13 23:24:50.526487 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-13 23:24:50.528243 | orchestrator | Tuesday 13 May 2025 23:24:50 +0000 (0:00:00.109) 0:00:03.458 *********** 2025-05-13 23:24:51.201877 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:24:51.201985 | orchestrator | 2025-05-13 23:24:51.202070 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-13 23:24:51.202793 | orchestrator | Tuesday 13 May 2025 23:24:51 +0000 (0:00:00.676) 0:00:04.135 *********** 2025-05-13 23:24:51.349501 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:24:51.349994 | orchestrator | 2025-05-13 23:24:51.352005 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-13 23:24:51.355126 | orchestrator | 2025-05-13 23:24:51.355158 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-13 23:24:51.356112 | orchestrator | Tuesday 13 May 2025 23:24:51 +0000 (0:00:00.144) 0:00:04.279 *********** 2025-05-13 23:24:51.470100 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:24:51.470172 | orchestrator | 2025-05-13 23:24:51.470225 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-13 23:24:51.470412 | orchestrator | Tuesday 13 May 2025 23:24:51 +0000 (0:00:00.123) 0:00:04.403 *********** 2025-05-13 23:24:52.176167 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:24:52.176882 | orchestrator | 2025-05-13 23:24:52.177304 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-13 23:24:52.178225 | orchestrator | Tuesday 13 May 2025 23:24:52 +0000 (0:00:00.704) 0:00:05.107 *********** 2025-05-13 23:24:52.311124 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:24:52.311953 | orchestrator | 2025-05-13 23:24:52.313252 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-13 23:24:52.313827 | orchestrator | 2025-05-13 23:24:52.314198 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-13 23:24:52.315934 | orchestrator | Tuesday 13 May 2025 23:24:52 +0000 (0:00:00.136) 0:00:05.244 *********** 2025-05-13 23:24:52.426868 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:24:52.427422 | orchestrator | 2025-05-13 23:24:52.428208 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-13 23:24:52.428941 | orchestrator | Tuesday 13 May 2025 23:24:52 +0000 (0:00:00.115) 0:00:05.359 *********** 2025-05-13 23:24:53.113530 | orchestrator | changed: [testbed-node-5] 2025-05-13 
23:24:53.114429 | orchestrator | 2025-05-13 23:24:53.116462 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-13 23:24:53.117937 | orchestrator | Tuesday 13 May 2025 23:24:53 +0000 (0:00:00.686) 0:00:06.046 *********** 2025-05-13 23:24:53.146520 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:24:53.146855 | orchestrator | 2025-05-13 23:24:53.147480 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:24:53.147874 | orchestrator | 2025-05-13 23:24:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:24:53.148029 | orchestrator | 2025-05-13 23:24:53 | INFO  | Please wait and do not abort execution. 2025-05-13 23:24:53.148736 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:53.149564 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:53.150420 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:53.151138 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:53.152074 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:53.152266 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:24:53.152889 | orchestrator | 2025-05-13 23:24:53.153621 | orchestrator | 2025-05-13 23:24:53.154748 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:24:53.156306 | orchestrator | Tuesday 13 May 2025 23:24:53 +0000 (0:00:00.035) 0:00:06.081 *********** 2025-05-13 23:24:53.156852 | orchestrator | =============================================================================== 2025-05-13 23:24:53.157413 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.40s 2025-05-13 23:24:53.158335 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.79s 2025-05-13 23:24:53.159379 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s 2025-05-13 23:24:53.779285 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-13 23:24:55.543557 | orchestrator | 2025-05-13 23:24:55 | INFO  | Task 18b519f7-7a10-458d-b957-b460fc6e285e (wait-for-connection) was prepared for execution. 2025-05-13 23:24:55.543732 | orchestrator | 2025-05-13 23:24:55 | INFO  | It takes a moment until task 18b519f7-7a10-458d-b957-b460fc6e285e (wait-for-connection) has been started and output is visible here. 
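The reboot play above runs once per node and intentionally does not wait: each host is told to reboot, the SSH connection is allowed to drop, and reachability is verified afterwards by the separate wait-for-connection run below. A rough shell analogue (hostnames taken from the log; the actual play uses Ansible's reboot handling, not an ssh loop):

    # Fire-and-forget reboot of all compute/storage nodes
    for node in testbed-node-{0..5}; do
      ssh "$node" 'sudo systemctl reboot' || true   # connection drop here is expected
    done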
2025-05-13 23:24:59.630550 | orchestrator | 2025-05-13 23:24:59.630940 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-13 23:24:59.636262 | orchestrator | 2025-05-13 23:24:59.638560 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-13 23:24:59.639234 | orchestrator | Tuesday 13 May 2025 23:24:59 +0000 (0:00:00.217) 0:00:00.217 *********** 2025-05-13 23:25:12.074632 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:25:12.074779 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:25:12.074799 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:25:12.074813 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:25:12.074824 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:25:12.076501 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:25:12.077068 | orchestrator | 2025-05-13 23:25:12.077805 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:25:12.079201 | orchestrator | 2025-05-13 23:25:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:25:12.079425 | orchestrator | 2025-05-13 23:25:12 | INFO  | Please wait and do not abort execution. 2025-05-13 23:25:12.080420 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:25:12.080457 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:25:12.080873 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:25:12.081358 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:25:12.081633 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:25:12.082264 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:25:12.082780 | orchestrator | 2025-05-13 23:25:12.083761 | orchestrator | 2025-05-13 23:25:12.084352 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:25:12.084890 | orchestrator | Tuesday 13 May 2025 23:25:12 +0000 (0:00:12.440) 0:00:12.658 *********** 2025-05-13 23:25:12.085535 | orchestrator | =============================================================================== 2025-05-13 23:25:12.085978 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.44s 2025-05-13 23:25:12.708249 | orchestrator | + osism apply hddtemp 2025-05-13 23:25:14.478774 | orchestrator | 2025-05-13 23:25:14 | INFO  | Task d1c90c8b-e745-42d6-9bae-a63834fb73bd (hddtemp) was prepared for execution. 2025-05-13 23:25:14.478848 | orchestrator | 2025-05-13 23:25:14 | INFO  | It takes a moment until task d1c90c8b-e745-42d6-9bae-a63834fb73bd (hddtemp) has been started and output is visible here. 
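The 12.44s "Wait until remote system is reachable" task above is a single blocking step that returns only once every node answers again. An illustrative polling loop with the same effect (timeout and interval values are assumptions, not values from the play):

    # Poll each node over SSH until it responds or the timeout expires
    wait_for_ssh() {
      local host=$1 timeout=${2:-300} start=$SECONDS
      until ssh -o ConnectTimeout=5 -o BatchMode=yes "$host" true 2>/dev/null; do
        (( SECONDS - start > timeout )) && { echo "$host: timeout" >&2; return 1; }
        sleep 5
      done
    }
    for node in testbed-node-{0..5}; do wait_for_ssh "$node"; done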
2025-05-13 23:25:18.711528 | orchestrator | 2025-05-13 23:25:18.713980 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-13 23:25:18.714060 | orchestrator | 2025-05-13 23:25:18.715043 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-13 23:25:18.716475 | orchestrator | Tuesday 13 May 2025 23:25:18 +0000 (0:00:00.291) 0:00:00.291 *********** 2025-05-13 23:25:18.864298 | orchestrator | ok: [testbed-manager] 2025-05-13 23:25:18.948182 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:25:19.024595 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:25:19.102129 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:25:19.265550 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:25:19.384228 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:25:19.387046 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:25:19.387190 | orchestrator | 2025-05-13 23:25:19.387259 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-13 23:25:19.387773 | orchestrator | Tuesday 13 May 2025 23:25:19 +0000 (0:00:00.673) 0:00:00.965 *********** 2025-05-13 23:25:20.447997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:25:20.451129 | orchestrator | 2025-05-13 23:25:20.451164 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-13 23:25:20.451170 | orchestrator | Tuesday 13 May 2025 23:25:20 +0000 (0:00:01.063) 0:00:02.029 *********** 2025-05-13 23:25:22.385924 | orchestrator | ok: [testbed-manager] 2025-05-13 23:25:22.388157 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:25:22.388194 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:25:22.388206 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:25:22.388217 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:25:22.389896 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:25:22.391174 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:25:22.392292 | orchestrator | 2025-05-13 23:25:22.392806 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-13 23:25:22.393506 | orchestrator | Tuesday 13 May 2025 23:25:22 +0000 (0:00:01.938) 0:00:03.968 *********** 2025-05-13 23:25:22.944796 | orchestrator | changed: [testbed-manager] 2025-05-13 23:25:23.031613 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:25:23.475529 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:25:23.476225 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:25:23.476821 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:25:23.477751 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:25:23.478796 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:25:23.479283 | orchestrator | 2025-05-13 23:25:23.480388 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-13 23:25:23.480707 | orchestrator | Tuesday 13 May 2025 23:25:23 +0000 (0:00:01.085) 0:00:05.054 *********** 2025-05-13 23:25:24.748953 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:25:24.749413 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:25:24.750446 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:25:24.752093 | orchestrator | ok: [testbed-node-3] 2025-05-13 
23:25:24.754152 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:25:24.754243 | orchestrator | ok: [testbed-manager] 2025-05-13 23:25:24.754813 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:25:24.755781 | orchestrator | 2025-05-13 23:25:24.756353 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-13 23:25:24.756889 | orchestrator | Tuesday 13 May 2025 23:25:24 +0000 (0:00:01.274) 0:00:06.328 *********** 2025-05-13 23:25:25.000732 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:25:25.094410 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:25:25.181560 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:25:25.283338 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:25:25.415797 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:25:25.417096 | orchestrator | changed: [testbed-manager] 2025-05-13 23:25:25.418281 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:25:25.418342 | orchestrator | 2025-05-13 23:25:25.420519 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-13 23:25:25.421028 | orchestrator | Tuesday 13 May 2025 23:25:25 +0000 (0:00:00.670) 0:00:06.999 *********** 2025-05-13 23:25:38.215215 | orchestrator | changed: [testbed-manager] 2025-05-13 23:25:38.215334 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:25:38.217212 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:25:38.217253 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:25:38.218640 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:25:38.219342 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:25:38.220802 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:25:38.221786 | orchestrator | 2025-05-13 23:25:38.221894 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-13 23:25:38.223736 | orchestrator | Tuesday 13 May 2025 23:25:38 +0000 (0:00:12.792) 0:00:19.791 *********** 2025-05-13 23:25:39.518757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:25:39.518881 | orchestrator | 2025-05-13 23:25:39.522168 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-13 23:25:39.522211 | orchestrator | Tuesday 13 May 2025 23:25:39 +0000 (0:00:01.305) 0:00:21.097 *********** 2025-05-13 23:25:41.398927 | orchestrator | changed: [testbed-manager] 2025-05-13 23:25:41.399289 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:25:41.400261 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:25:41.402331 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:25:41.403371 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:25:41.404362 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:25:41.405226 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:25:41.405977 | orchestrator | 2025-05-13 23:25:41.407602 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:25:41.407649 | orchestrator | 2025-05-13 23:25:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:25:41.407663 | orchestrator | 2025-05-13 23:25:41 | INFO  | Please wait and do not abort execution. 
2025-05-13 23:25:41.409020 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:25:41.409660 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:41.410237 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:41.411376 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:41.411860 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:41.412797 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:41.413424 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:41.414161 | orchestrator | 2025-05-13 23:25:41.414656 | orchestrator | 2025-05-13 23:25:41.415841 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:25:41.415945 | orchestrator | Tuesday 13 May 2025 23:25:41 +0000 (0:00:01.883) 0:00:22.981 *********** 2025-05-13 23:25:41.416538 | orchestrator | =============================================================================== 2025-05-13 23:25:41.417152 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.79s 2025-05-13 23:25:41.417640 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s 2025-05-13 23:25:41.418540 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.88s 2025-05-13 23:25:41.418862 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.31s 2025-05-13 23:25:41.419502 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.27s 2025-05-13 23:25:41.420035 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.09s 2025-05-13 23:25:41.420607 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.06s 2025-05-13 23:25:41.420891 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.67s 2025-05-13 23:25:41.421339 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.67s 2025-05-13 23:25:42.022477 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-13 23:25:43.568870 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-13 23:25:43.569005 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-13 23:25:43.569028 | orchestrator | + local max_attempts=60 2025-05-13 23:25:43.569041 | orchestrator | + local name=ceph-ansible 2025-05-13 23:25:43.569052 | orchestrator | + local attempt_num=1 2025-05-13 23:25:43.569166 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-13 23:25:43.602937 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-13 23:25:43.603066 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-13 23:25:43.603080 | orchestrator | + local max_attempts=60 2025-05-13 23:25:43.603092 | orchestrator | + local name=kolla-ansible 2025-05-13 23:25:43.603103 | orchestrator | + local attempt_num=1 2025-05-13 23:25:43.603182 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' kolla-ansible 2025-05-13 23:25:43.632651 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-13 23:25:43.632776 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-13 23:25:43.632790 | orchestrator | + local max_attempts=60 2025-05-13 23:25:43.632802 | orchestrator | + local name=osism-ansible 2025-05-13 23:25:43.632813 | orchestrator | + local attempt_num=1 2025-05-13 23:25:43.632824 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-13 23:25:43.657330 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-13 23:25:43.657425 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-13 23:25:43.657438 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-13 23:25:43.851293 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-13 23:25:44.028623 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-13 23:25:44.194228 | orchestrator | ARA in osism-ansible already disabled. 2025-05-13 23:25:44.395313 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-13 23:25:44.396026 | orchestrator | + osism apply gather-facts 2025-05-13 23:25:46.146475 | orchestrator | 2025-05-13 23:25:46 | INFO  | Task 6ec190b7-d411-4cda-a8e7-b08f97484bf8 (gather-facts) was prepared for execution. 2025-05-13 23:25:46.146579 | orchestrator | 2025-05-13 23:25:46 | INFO  | It takes a moment until task 6ec190b7-d411-4cda-a8e7-b08f97484bf8 (gather-facts) has been started and output is visible here. 2025-05-13 23:25:50.228309 | orchestrator | 2025-05-13 23:25:50.229007 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-13 23:25:50.233864 | orchestrator | 2025-05-13 23:25:50.234209 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-13 23:25:50.235967 | orchestrator | Tuesday 13 May 2025 23:25:50 +0000 (0:00:00.233) 0:00:00.233 *********** 2025-05-13 23:25:55.474422 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:25:55.474539 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:25:55.476196 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:25:55.479901 | orchestrator | ok: [testbed-manager] 2025-05-13 23:25:55.481615 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:25:55.482974 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:25:55.483503 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:25:55.484459 | orchestrator | 2025-05-13 23:25:55.485005 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-13 23:25:55.486209 | orchestrator | 2025-05-13 23:25:55.487240 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-13 23:25:55.487339 | orchestrator | Tuesday 13 May 2025 23:25:55 +0000 (0:00:05.248) 0:00:05.481 *********** 2025-05-13 23:25:55.630884 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:25:55.707205 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:25:55.785572 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:25:55.869423 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:25:55.939203 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:25:55.977190 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:25:55.978397 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:25:55.978432 | orchestrator | 2025-05-13 23:25:55.979150 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-13 23:25:55.979941 | orchestrator | 2025-05-13 23:25:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:25:55.979964 | orchestrator | 2025-05-13 23:25:55 | INFO  | Please wait and do not abort execution. 2025-05-13 23:25:55.980712 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:55.981183 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:55.981779 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:55.982616 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:55.983752 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:55.983957 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:55.985760 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:25:55.986773 | orchestrator | 2025-05-13 23:25:55.987321 | orchestrator | 2025-05-13 23:25:55.988016 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:25:55.988470 | orchestrator | Tuesday 13 May 2025 23:25:55 +0000 (0:00:00.504) 0:00:05.986 *********** 2025-05-13 23:25:55.989187 | orchestrator | =============================================================================== 2025-05-13 23:25:55.989703 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.25s 2025-05-13 23:25:55.990230 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-05-13 23:25:56.632853 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-13 23:25:56.652525 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-13 23:25:56.672661 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-13 23:25:56.692177 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-13 23:25:56.709504 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-13 23:25:56.727806 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-13 23:25:56.748198 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-13 23:25:56.769139 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-13 23:25:56.791664 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-13 23:25:56.809982 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-13 23:25:56.825016 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-13 23:25:56.843908 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-13 23:25:56.858294 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-05-13 23:25:56.880838 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-13 23:25:56.908791 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-13 23:25:56.928448 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-13 23:25:56.940967 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-13 23:25:56.955009 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-13 23:25:56.975584 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-13 23:25:56.991207 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-13 23:25:57.003547 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-13 23:25:57.216272 | orchestrator | ok: Runtime: 0:33:26.350937 2025-05-13 23:25:57.291490 | 2025-05-13 23:25:57.291622 | TASK [Deploy services] 2025-05-13 23:25:57.825970 | orchestrator | skipping: Conditional result was False 2025-05-13 23:25:57.846576 | 2025-05-13 23:25:57.846766 | TASK [Deploy in a nutshell] 2025-05-13 23:25:58.543148 | orchestrator | + set -e 2025-05-13 23:25:58.543386 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-13 23:25:58.543415 | orchestrator | ++ export INTERACTIVE=false 2025-05-13 23:25:58.543452 | orchestrator | ++ INTERACTIVE=false 2025-05-13 23:25:58.543473 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-13 23:25:58.543486 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-13 23:25:58.543499 | orchestrator | + source /opt/manager-vars.sh 2025-05-13 23:25:58.543578 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-13 23:25:58.543615 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-13 23:25:58.543630 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-13 23:25:58.543646 | orchestrator | ++ CEPH_VERSION=reef 2025-05-13 23:25:58.543703 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-13 23:25:58.543725 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-13 23:25:58.543737 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-13 23:25:58.543777 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-13 23:25:58.543795 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-13 23:25:58.543810 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-13 23:25:58.543821 | orchestrator | ++ export ARA=false 2025-05-13 23:25:58.543833 | orchestrator | ++ ARA=false 2025-05-13 23:25:58.543844 | orchestrator | ++ export TEMPEST=false 2025-05-13 23:25:58.543857 | orchestrator | ++ TEMPEST=false 2025-05-13 23:25:58.543868 | orchestrator | ++ export IS_ZUUL=true 2025-05-13 23:25:58.543878 | orchestrator | ++ IS_ZUUL=true 2025-05-13 23:25:58.543890 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-13 
23:25:58.543900 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-13 23:25:58.543911 | orchestrator | ++ export EXTERNAL_API=false 2025-05-13 23:25:58.543922 | orchestrator | ++ EXTERNAL_API=false 2025-05-13 23:25:58.543932 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-13 23:25:58.543943 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-13 23:25:58.543953 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-13 23:25:58.543964 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-13 23:25:58.543975 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-13 23:25:58.543986 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-13 23:25:58.543997 | orchestrator | + echo 2025-05-13 23:25:58.544009 | orchestrator | 2025-05-13 23:25:58.544020 | orchestrator | # PULL IMAGES 2025-05-13 23:25:58.544031 | orchestrator | 2025-05-13 23:25:58.544042 | orchestrator | + echo '# PULL IMAGES' 2025-05-13 23:25:58.544053 | orchestrator | + echo 2025-05-13 23:25:58.545065 | orchestrator | ++ semver latest 7.0.0 2025-05-13 23:25:58.599506 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-13 23:25:58.599600 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-13 23:25:58.599614 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-13 23:26:00.412906 | orchestrator | 2025-05-13 23:26:00 | INFO  | Trying to run play pull-images in environment custom 2025-05-13 23:26:00.473582 | orchestrator | 2025-05-13 23:26:00 | INFO  | Task a373939e-082f-42b1-8f1e-c4ffb9f8e1bc (pull-images) was prepared for execution. 2025-05-13 23:26:00.473716 | orchestrator | 2025-05-13 23:26:00 | INFO  | It takes a moment until task a373939e-082f-42b1-8f1e-c4ffb9f8e1bc (pull-images) has been started and output is visible here. 2025-05-13 23:26:04.492143 | orchestrator | 2025-05-13 23:26:04.492453 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-13 23:26:04.493394 | orchestrator | 2025-05-13 23:26:04.494178 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-13 23:26:04.495196 | orchestrator | Tuesday 13 May 2025 23:26:04 +0000 (0:00:00.167) 0:00:00.167 *********** 2025-05-13 23:27:17.320627 | orchestrator | changed: [testbed-manager] 2025-05-13 23:27:17.320797 | orchestrator | 2025-05-13 23:27:17.320820 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-13 23:27:17.320834 | orchestrator | Tuesday 13 May 2025 23:27:17 +0000 (0:01:12.827) 0:01:12.995 *********** 2025-05-13 23:28:13.808511 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-13 23:28:13.808655 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-13 23:28:13.808690 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-13 23:28:13.808759 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-13 23:28:13.808774 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-13 23:28:13.808786 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-13 23:28:13.809771 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-13 23:28:13.810272 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-13 23:28:13.810769 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-13 23:28:13.811844 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-13 23:28:13.812105 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-13 
23:28:13.813269 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-13 23:28:13.814677 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-13 23:28:13.814725 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-13 23:28:13.815069 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-13 23:28:13.815872 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-13 23:28:13.816200 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-13 23:28:13.816648 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-13 23:28:13.817508 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-13 23:28:13.817787 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-13 23:28:13.818616 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-13 23:28:13.820520 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-13 23:28:13.821224 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-13 23:28:13.822093 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-13 23:28:13.822602 | orchestrator | 2025-05-13 23:28:13.823254 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:28:13.823576 | orchestrator | 2025-05-13 23:28:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:28:13.823598 | orchestrator | 2025-05-13 23:28:13 | INFO  | Please wait and do not abort execution. 2025-05-13 23:28:13.824316 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:28:13.825054 | orchestrator | 2025-05-13 23:28:13.825262 | orchestrator | 2025-05-13 23:28:13.826177 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:28:13.826203 | orchestrator | Tuesday 13 May 2025 23:28:13 +0000 (0:00:56.487) 0:02:09.482 *********** 2025-05-13 23:28:13.826610 | orchestrator | =============================================================================== 2025-05-13 23:28:13.827792 | orchestrator | Pull keystone image ---------------------------------------------------- 72.83s 2025-05-13 23:28:13.828416 | orchestrator | Pull other images ------------------------------------------------------ 56.49s 2025-05-13 23:28:16.184782 | orchestrator | 2025-05-13 23:28:16 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-13 23:28:16.245406 | orchestrator | 2025-05-13 23:28:16 | INFO  | Task e590598d-5fbd-4554-89c9-67533664a98a (wipe-partitions) was prepared for execution. 2025-05-13 23:28:16.245523 | orchestrator | 2025-05-13 23:28:16 | INFO  | It takes a moment until task e590598d-5fbd-4554-89c9-67533664a98a (wipe-partitions) has been started and output is visible here. 
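The pull-images play recapped above warms the manager's local image cache before deployment, so later kolla-ansible runs do not block on registry downloads. In shell terms it boils down to one pull per service image; the registry and namespace below are placeholders, since the log lists only the service names, while the tag follows OPENSTACK_VERSION from the environment above:

    REGISTRY=registry.example.com/kolla   # placeholder, not shown in the log
    TAG=2024.2                            # matches OPENSTACK_VERSION above
    for image in keystone aodh barbican ceilometer cinder common designate glance \
                 grafana horizon ironic loadbalancer magnum mariadb memcached neutron \
                 nova octavia opensearch openvswitch ovn placement rabbitmq redis skyline; do
      docker pull "${REGISTRY}/${image}:${TAG}"
    done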
2025-05-13 23:28:20.339562 | orchestrator | 2025-05-13 23:28:20.339744 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-13 23:28:20.341607 | orchestrator | 2025-05-13 23:28:20.341982 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-13 23:28:20.343920 | orchestrator | Tuesday 13 May 2025 23:28:20 +0000 (0:00:00.143) 0:00:00.143 *********** 2025-05-13 23:28:20.929937 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:28:20.930101 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:28:20.931037 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:28:20.931081 | orchestrator | 2025-05-13 23:28:20.931382 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-13 23:28:20.931823 | orchestrator | Tuesday 13 May 2025 23:28:20 +0000 (0:00:00.591) 0:00:00.735 *********** 2025-05-13 23:28:21.106152 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:28:21.208886 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:28:21.208987 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:28:21.209389 | orchestrator | 2025-05-13 23:28:21.210219 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-13 23:28:21.211148 | orchestrator | Tuesday 13 May 2025 23:28:21 +0000 (0:00:00.276) 0:00:01.012 *********** 2025-05-13 23:28:21.956717 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:28:21.956812 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:28:21.956822 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:28:21.956878 | orchestrator | 2025-05-13 23:28:21.956890 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-13 23:28:21.956899 | orchestrator | Tuesday 13 May 2025 23:28:21 +0000 (0:00:00.749) 0:00:01.761 *********** 2025-05-13 23:28:22.099299 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:28:22.193069 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:28:22.195706 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:28:22.198533 | orchestrator | 2025-05-13 23:28:22.198572 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-13 23:28:22.200258 | orchestrator | Tuesday 13 May 2025 23:28:22 +0000 (0:00:00.238) 0:00:02.000 *********** 2025-05-13 23:28:23.371105 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-13 23:28:23.372217 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-13 23:28:23.372272 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-13 23:28:23.372869 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-13 23:28:23.374418 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-13 23:28:23.374541 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-13 23:28:23.375399 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-13 23:28:23.376124 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-13 23:28:23.377307 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-13 23:28:23.377942 | orchestrator | 2025-05-13 23:28:23.381546 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-13 23:28:23.381654 | orchestrator | Tuesday 13 May 2025 23:28:23 +0000 (0:00:01.176) 0:00:03.176 *********** 2025-05-13 23:28:24.746773 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-13 23:28:24.750384 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-13 23:28:24.751109 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-13 23:28:24.752028 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-13 23:28:24.753464 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-13 23:28:24.754183 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-13 23:28:24.755977 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-13 23:28:24.757188 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-13 23:28:24.758456 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-13 23:28:24.759157 | orchestrator | 2025-05-13 23:28:24.760007 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-13 23:28:24.761287 | orchestrator | Tuesday 13 May 2025 23:28:24 +0000 (0:00:01.372) 0:00:04.548 *********** 2025-05-13 23:28:26.901071 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-13 23:28:26.901201 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-13 23:28:26.902540 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-13 23:28:26.904155 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-13 23:28:26.906192 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-13 23:28:26.910207 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-13 23:28:26.910961 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-13 23:28:26.912227 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-13 23:28:26.914214 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-13 23:28:26.914745 | orchestrator | 2025-05-13 23:28:26.918818 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-13 23:28:26.919623 | orchestrator | Tuesday 13 May 2025 23:28:26 +0000 (0:00:02.157) 0:00:06.706 *********** 2025-05-13 23:28:27.507946 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:28:27.508825 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:28:27.512710 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:28:27.513933 | orchestrator | 2025-05-13 23:28:27.515324 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-13 23:28:27.516254 | orchestrator | Tuesday 13 May 2025 23:28:27 +0000 (0:00:00.606) 0:00:07.313 *********** 2025-05-13 23:28:28.148817 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:28:28.150154 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:28:28.150387 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:28:28.151153 | orchestrator | 2025-05-13 23:28:28.154692 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:28:28.157201 | orchestrator | 2025-05-13 23:28:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:28:28.157273 | orchestrator | 2025-05-13 23:28:28 | INFO  | Please wait and do not abort execution. 
2025-05-13 23:28:28.163217 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:28:28.166204 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:28:28.166256 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:28:28.166265 | orchestrator | 2025-05-13 23:28:28.166272 | orchestrator | 2025-05-13 23:28:28.166279 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:28:28.166288 | orchestrator | Tuesday 13 May 2025 23:28:28 +0000 (0:00:00.641) 0:00:07.954 *********** 2025-05-13 23:28:28.166295 | orchestrator | =============================================================================== 2025-05-13 23:28:28.166634 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.16s 2025-05-13 23:28:28.167032 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.37s 2025-05-13 23:28:28.167459 | orchestrator | Check device availability ----------------------------------------------- 1.18s 2025-05-13 23:28:28.167913 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.75s 2025-05-13 23:28:28.168263 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2025-05-13 23:28:28.168598 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2025-05-13 23:28:28.169025 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-05-13 23:28:28.169351 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s 2025-05-13 23:28:28.169775 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2025-05-13 23:28:30.640524 | orchestrator | 2025-05-13 23:28:30 | INFO  | Task 932aa592-a569-47da-88ea-9e627ac825a1 (facts) was prepared for execution. 2025-05-13 23:28:30.640657 | orchestrator | 2025-05-13 23:28:30 | INFO  | It takes a moment until task 932aa592-a569-47da-88ea-9e627ac825a1 (facts) has been started and output is visible here. 
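The wipe sequence recapped above, expressed as the equivalent manual commands per device (the device list /dev/sdb../dev/sdd is taken from the play output; this destroys all data on those disks):

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
      sudo wipefs --all "$dev"                                    # "Wipe partitions with wipefs"
      sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct  # "Overwrite first 32M with zeros"
    done
    sudo udevadm control --reload-rules   # "Reload udev rules"
    sudo udevadm trigger                  # "Request device events from the kernel"

This leaves the disks free of filesystem and LVM signatures so the ceph-configure-lvm-volumes run that follows can claim them.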
2025-05-13 23:28:34.820840 | orchestrator |
2025-05-13 23:28:34.822220 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-13 23:28:34.823656 | orchestrator |
2025-05-13 23:28:34.823705 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-13 23:28:34.824571 | orchestrator | Tuesday 13 May 2025 23:28:34 +0000 (0:00:00.283) 0:00:00.283 ***********
2025-05-13 23:28:35.923859 | orchestrator | ok: [testbed-manager]
2025-05-13 23:28:35.925325 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:28:35.926212 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:28:35.927454 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:28:35.927463 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:28:35.928724 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:28:35.929686 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:28:35.930336 | orchestrator |
2025-05-13 23:28:35.931001 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-13 23:28:35.931792 | orchestrator | Tuesday 13 May 2025 23:28:35 +0000 (0:00:01.099) 0:00:01.383 ***********
2025-05-13 23:28:36.106648 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:28:36.191067 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:28:36.275374 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:28:36.352356 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:28:36.427908 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:37.186792 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:28:37.191512 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:28:37.193240 | orchestrator |
2025-05-13 23:28:37.197750 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-13 23:28:37.198262 | orchestrator |
2025-05-13 23:28:37.199283 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-13 23:28:37.202071 | orchestrator | Tuesday 13 May 2025 23:28:37 +0000 (0:00:01.265) 0:00:02.649 ***********
2025-05-13 23:28:41.911298 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:28:41.911404 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:28:41.911686 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:28:41.913314 | orchestrator | ok: [testbed-manager]
2025-05-13 23:28:41.913409 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:28:41.913882 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:28:41.916254 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:28:41.916508 | orchestrator |
2025-05-13 23:28:41.917029 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-13 23:28:41.918114 | orchestrator |
2025-05-13 23:28:41.919398 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-13 23:28:41.919702 | orchestrator | Tuesday 13 May 2025 23:28:41 +0000 (0:00:04.723) 0:00:07.373 ***********
2025-05-13 23:28:42.046950 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:28:42.110882 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:28:42.177859 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:28:42.249808 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:28:42.321338 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:42.354828 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:28:42.355015 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:28:42.355730 | orchestrator |
2025-05-13 23:28:42.356495 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:28:42.357115 | orchestrator | 2025-05-13 23:28:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:28:42.357487 | orchestrator | 2025-05-13 23:28:42 | INFO  | Please wait and do not abort execution.
2025-05-13 23:28:42.358223 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:28:42.358764 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:28:42.359145 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:28:42.359604 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:28:42.360107 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:28:42.360495 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:28:42.361049 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:28:42.361496 | orchestrator |
2025-05-13 23:28:42.362093 | orchestrator |
2025-05-13 23:28:42.362630 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:28:42.363055 | orchestrator | Tuesday 13 May 2025 23:28:42 +0000 (0:00:00.447) 0:00:07.821 ***********
2025-05-13 23:28:42.363577 | orchestrator | ===============================================================================
2025-05-13 23:28:42.364249 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.72s
2025-05-13 23:28:42.364522 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s
2025-05-13 23:28:42.365184 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s
2025-05-13 23:28:42.365518 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s
2025-05-13 23:28:44.873639 | orchestrator | 2025-05-13 23:28:44 | INFO  | Task 903cd396-2fc0-47ef-a5c5-b5eccb6e34a8 (ceph-configure-lvm-volumes) was prepared for execution.
2025-05-13 23:28:44.873718 | orchestrator | 2025-05-13 23:28:44 | INFO  | It takes a moment until task 903cd396-2fc0-47ef-a5c5-b5eccb6e34a8 (ceph-configure-lvm-volumes) has been started and output is visible here.
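The facts task above is queued by the OSISM manager service, which is why the INFO lines announce a task ID before the Ansible output streams in. For reference, a minimal sketch of how such a run is typically started and spot-checked from the manager node (standard OSISM/Ansible usage, not taken verbatim from this job; the inventory path is an assumption):

    osism apply facts                                    # queue the facts play, as logged above
    ansible -i /ansible/inventory -m ansible.builtin.setup testbed-node-3 | head   # ad-hoc fact check for one host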
2025-05-13 23:28:49.325202 | orchestrator |
2025-05-13 23:28:49.325444 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-13 23:28:49.325859 | orchestrator |
2025-05-13 23:28:49.326449 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-13 23:28:49.327050 | orchestrator | Tuesday 13 May 2025 23:28:49 +0000 (0:00:00.353) 0:00:00.353 ***********
2025-05-13 23:28:49.572730 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-13 23:28:49.572829 | orchestrator |
2025-05-13 23:28:49.573105 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-13 23:28:49.575024 | orchestrator | Tuesday 13 May 2025 23:28:49 +0000 (0:00:00.247) 0:00:00.600 ***********
2025-05-13 23:28:49.792786 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:28:49.792880 | orchestrator |
2025-05-13 23:28:49.794231 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:49.797506 | orchestrator | Tuesday 13 May 2025 23:28:49 +0000 (0:00:00.221) 0:00:00.822 ***********
2025-05-13 23:28:50.175291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-13 23:28:50.176197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-13 23:28:50.179056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-13 23:28:50.184511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-13 23:28:50.187713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-13 23:28:50.187740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-13 23:28:50.187753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-13 23:28:50.190444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-13 23:28:50.191492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-13 23:28:50.191514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-13 23:28:50.193252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-13 23:28:50.194382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-13 23:28:50.194493 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-13 23:28:50.194854 | orchestrator |
2025-05-13 23:28:50.195604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:50.195972 | orchestrator | Tuesday 13 May 2025 23:28:50 +0000 (0:00:00.382) 0:00:01.205 ***********
2025-05-13 23:28:50.690666 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:50.692296 | orchestrator |
2025-05-13 23:28:50.692757 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:50.693592 | orchestrator | Tuesday 13 May 2025 23:28:50 +0000 (0:00:00.514) 0:00:01.719 ***********
2025-05-13 23:28:50.876060 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:50.878383 | orchestrator |
2025-05-13 23:28:50.880409 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:50.885279 | orchestrator | Tuesday 13 May 2025 23:28:50 +0000 (0:00:00.188) 0:00:01.907 ***********
2025-05-13 23:28:51.095149 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:51.095881 | orchestrator |
2025-05-13 23:28:51.096869 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:51.098159 | orchestrator | Tuesday 13 May 2025 23:28:51 +0000 (0:00:00.218) 0:00:02.126 ***********
2025-05-13 23:28:51.282057 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:51.286360 | orchestrator |
2025-05-13 23:28:51.291287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:51.291317 | orchestrator | Tuesday 13 May 2025 23:28:51 +0000 (0:00:00.184) 0:00:02.310 ***********
2025-05-13 23:28:51.459541 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:51.462211 | orchestrator |
2025-05-13 23:28:51.462272 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:51.464212 | orchestrator | Tuesday 13 May 2025 23:28:51 +0000 (0:00:00.179) 0:00:02.490 ***********
2025-05-13 23:28:51.665369 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:51.666684 | orchestrator |
2025-05-13 23:28:51.669875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:51.670559 | orchestrator | Tuesday 13 May 2025 23:28:51 +0000 (0:00:00.204) 0:00:02.695 ***********
2025-05-13 23:28:51.875374 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:51.876548 | orchestrator |
2025-05-13 23:28:51.882450 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:51.883096 | orchestrator | Tuesday 13 May 2025 23:28:51 +0000 (0:00:00.210) 0:00:02.905 ***********
2025-05-13 23:28:52.092874 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:52.094880 | orchestrator |
2025-05-13 23:28:52.101689 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:52.101916 | orchestrator | Tuesday 13 May 2025 23:28:52 +0000 (0:00:00.217) 0:00:03.123 ***********
2025-05-13 23:28:52.502329 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec)
2025-05-13 23:28:52.503801 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec)
2025-05-13 23:28:52.505124 | orchestrator |
2025-05-13 23:28:52.506251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:52.507541 | orchestrator | Tuesday 13 May 2025 23:28:52 +0000 (0:00:00.407) 0:00:03.531 ***********
2025-05-13 23:28:52.953995 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45)
2025-05-13 23:28:52.955581 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45)
2025-05-13 23:28:52.958226 | orchestrator |
2025-05-13 23:28:52.958353 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:52.959337 | orchestrator | Tuesday 13 May 2025 23:28:52 +0000 (0:00:00.452) 0:00:03.983 ***********
2025-05-13 23:28:53.582539 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000)
2025-05-13 23:28:53.583802 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000)
2025-05-13 23:28:53.584900 | orchestrator |
2025-05-13 23:28:53.586085 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:53.586855 | orchestrator | Tuesday 13 May 2025 23:28:53 +0000 (0:00:00.627) 0:00:04.611 ***********
2025-05-13 23:28:54.248241 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438)
2025-05-13 23:28:54.248668 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438)
2025-05-13 23:28:54.250106 | orchestrator |
2025-05-13 23:28:54.252615 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:28:54.253336 | orchestrator | Tuesday 13 May 2025 23:28:54 +0000 (0:00:00.664) 0:00:05.275 ***********
2025-05-13 23:28:54.995995 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-13 23:28:54.998763 | orchestrator |
2025-05-13 23:28:54.998848 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:55.001815 | orchestrator | Tuesday 13 May 2025 23:28:54 +0000 (0:00:00.749) 0:00:06.025 ***********
2025-05-13 23:28:55.396417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-13 23:28:55.396592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-13 23:28:55.399226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-13 23:28:55.399663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-13 23:28:55.400362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-13 23:28:55.402230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-13 23:28:55.402595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-13 23:28:55.403097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-13 23:28:55.403592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-13 23:28:55.404126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-13 23:28:55.404526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-13 23:28:55.405096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-13 23:28:55.405452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-13 23:28:55.405921 | orchestrator |
2025-05-13 23:28:55.406279 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:55.406681 | orchestrator | Tuesday 13 May 2025 23:28:55 +0000 (0:00:00.401) 0:00:06.427 ***********
2025-05-13 23:28:55.612114 | orchestrator | skipping: [testbed-node-3]
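Each physical disk found above is registered under two stable /dev/disk/by-id aliases (scsi-0QEMU_... and scsi-SQEMU_..., both derived from the QEMU volume serial), which is why every ok result appears as a pair. A hedged sketch for inspecting those links directly on a node, using standard udev tooling:

    ls -l /dev/disk/by-id/ | grep -w sdb                           # by-id symlinks that resolve to sdb
    udevadm info --query=property --name=/dev/sdb | grep DEVLINKS  # the same aliases from the udev database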
2025-05-13 23:28:55.612307 | orchestrator |
2025-05-13 23:28:55.612329 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:55.612415 | orchestrator | Tuesday 13 May 2025 23:28:55 +0000 (0:00:00.212) 0:00:06.639 ***********
2025-05-13 23:28:55.805593 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:55.805886 | orchestrator |
2025-05-13 23:28:55.809848 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:55.809878 | orchestrator | Tuesday 13 May 2025 23:28:55 +0000 (0:00:00.197) 0:00:06.837 ***********
2025-05-13 23:28:55.985851 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:55.985994 | orchestrator |
2025-05-13 23:28:55.986202 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:55.988307 | orchestrator | Tuesday 13 May 2025 23:28:55 +0000 (0:00:00.180) 0:00:07.017 ***********
2025-05-13 23:28:56.174565 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:56.176996 | orchestrator |
2025-05-13 23:28:56.177076 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:56.177187 | orchestrator | Tuesday 13 May 2025 23:28:56 +0000 (0:00:00.188) 0:00:07.205 ***********
2025-05-13 23:28:56.364381 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:56.364577 | orchestrator |
2025-05-13 23:28:56.364788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:56.368374 | orchestrator | Tuesday 13 May 2025 23:28:56 +0000 (0:00:00.189) 0:00:07.395 ***********
2025-05-13 23:28:56.553187 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:56.553273 | orchestrator |
2025-05-13 23:28:56.553348 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:56.553552 | orchestrator | Tuesday 13 May 2025 23:28:56 +0000 (0:00:00.185) 0:00:07.580 ***********
2025-05-13 23:28:56.727447 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:56.727597 | orchestrator |
2025-05-13 23:28:56.729361 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:56.730441 | orchestrator | Tuesday 13 May 2025 23:28:56 +0000 (0:00:00.178) 0:00:07.759 ***********
2025-05-13 23:28:56.909898 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:56.913742 | orchestrator |
2025-05-13 23:28:56.913891 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:56.914141 | orchestrator | Tuesday 13 May 2025 23:28:56 +0000 (0:00:00.180) 0:00:07.940 ***********
2025-05-13 23:28:58.069502 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-13 23:28:58.073419 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-13 23:28:58.076878 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-13 23:28:58.076897 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-13 23:28:58.076903 | orchestrator |
2025-05-13 23:28:58.079704 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:58.080999 | orchestrator | Tuesday 13 May 2025 23:28:58 +0000 (0:00:01.157) 0:00:09.097 ***********
2025-05-13 23:28:58.348576 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:58.348681 | orchestrator |
2025-05-13 23:28:58.348696 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:58.348777 | orchestrator | Tuesday 13 May 2025 23:28:58 +0000 (0:00:00.280) 0:00:09.378 ***********
2025-05-13 23:28:58.569024 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:58.569137 | orchestrator |
2025-05-13 23:28:58.569214 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:58.569605 | orchestrator | Tuesday 13 May 2025 23:28:58 +0000 (0:00:00.215) 0:00:09.594 ***********
2025-05-13 23:28:58.840053 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:58.844204 | orchestrator |
2025-05-13 23:28:58.844275 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:28:58.844289 | orchestrator | Tuesday 13 May 2025 23:28:58 +0000 (0:00:00.275) 0:00:09.869 ***********
2025-05-13 23:28:59.093056 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:59.093334 | orchestrator |
2025-05-13 23:28:59.098242 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-13 23:28:59.098387 | orchestrator | Tuesday 13 May 2025 23:28:59 +0000 (0:00:00.255) 0:00:10.125 ***********
2025-05-13 23:28:59.266551 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-13 23:28:59.268450 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-05-13 23:28:59.268682 | orchestrator |
2025-05-13 23:28:59.270751 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-13 23:28:59.270778 | orchestrator | Tuesday 13 May 2025 23:28:59 +0000 (0:00:00.170) 0:00:10.295 ***********
2025-05-13 23:28:59.392744 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:59.395643 | orchestrator |
2025-05-13 23:28:59.395683 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-13 23:28:59.396029 | orchestrator | Tuesday 13 May 2025 23:28:59 +0000 (0:00:00.129) 0:00:10.424 ***********
2025-05-13 23:28:59.536709 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:59.539229 | orchestrator |
2025-05-13 23:28:59.539585 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-13 23:28:59.539621 | orchestrator | Tuesday 13 May 2025 23:28:59 +0000 (0:00:00.138) 0:00:10.563 ***********
2025-05-13 23:28:59.682341 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:28:59.682448 | orchestrator |
2025-05-13 23:28:59.682463 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-13 23:28:59.682476 | orchestrator | Tuesday 13 May 2025 23:28:59 +0000 (0:00:00.145) 0:00:10.709 ***********
2025-05-13 23:28:59.854383 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:28:59.854484 | orchestrator |
2025-05-13 23:28:59.854501 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-13 23:28:59.855425 | orchestrator | Tuesday 13 May 2025 23:28:59 +0000 (0:00:00.171) 0:00:10.881 ***********
2025-05-13 23:29:00.043338 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cf553414-fd5b-54a4-812a-8e7012220720'}})
2025-05-13 23:29:00.043436 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9ea6307c-c51b-54ed-aeb4-48fe7d66605c'}})
2025-05-13 23:29:00.043450 | orchestrator |
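The osd_lvm_uuid values above are name-based (version 5) UUIDs, so re-running the play yields the same identifier for the same device, and the remaining tasks derive all LVM names from them. A sketch of the naming scheme as it appears in the output that follows (the UUID itself is copied from this log; how it is seeded is not shown here):

    uuid="cf553414-fd5b-54a4-812a-8e7012220720"   # osd_lvm_uuid of sdb on testbed-node-3, from the log
    data_vg="ceph-${uuid}"                        # volume group name used in lvm_volumes
    data="osd-block-${uuid}"                      # logical volume name used in lvm_volumes
    echo "lvm_volumes entry: data=${data} data_vg=${data_vg}"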
2025-05-13 23:29:00.043462 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-13 23:29:00.043473 | orchestrator | Tuesday 13 May 2025 23:29:00 +0000 (0:00:00.187) 0:00:11.068 ***********
2025-05-13 23:29:00.187771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cf553414-fd5b-54a4-812a-8e7012220720'}})
2025-05-13 23:29:00.187879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9ea6307c-c51b-54ed-aeb4-48fe7d66605c'}})
2025-05-13 23:29:00.190448 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:29:00.190520 | orchestrator |
2025-05-13 23:29:00.190642 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-13 23:29:00.192528 | orchestrator | Tuesday 13 May 2025 23:29:00 +0000 (0:00:00.150) 0:00:11.219 ***********
2025-05-13 23:29:00.476559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cf553414-fd5b-54a4-812a-8e7012220720'}})
2025-05-13 23:29:00.476663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9ea6307c-c51b-54ed-aeb4-48fe7d66605c'}})
2025-05-13 23:29:00.476679 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:29:00.476693 | orchestrator |
2025-05-13 23:29:00.476705 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-13 23:29:00.476716 | orchestrator | Tuesday 13 May 2025 23:29:00 +0000 (0:00:00.285) 0:00:11.504 ***********
2025-05-13 23:29:00.591627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cf553414-fd5b-54a4-812a-8e7012220720'}})
2025-05-13 23:29:00.591777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9ea6307c-c51b-54ed-aeb4-48fe7d66605c'}})
2025-05-13 23:29:00.593129 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:29:00.593240 | orchestrator |
2025-05-13 23:29:00.593527 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-13 23:29:00.593890 | orchestrator | Tuesday 13 May 2025 23:29:00 +0000 (0:00:00.119) 0:00:11.624 ***********
2025-05-13 23:29:00.719298 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:29:00.720259 | orchestrator |
2025-05-13 23:29:00.720553 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-13 23:29:00.723097 | orchestrator | Tuesday 13 May 2025 23:29:00 +0000 (0:00:00.127) 0:00:11.751 ***********
2025-05-13 23:29:00.844107 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:29:00.844277 | orchestrator |
2025-05-13 23:29:00.844431 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-13 23:29:00.845949 | orchestrator | Tuesday 13 May 2025 23:29:00 +0000 (0:00:00.124) 0:00:11.876 ***********
2025-05-13 23:29:00.974167 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:29:00.974385 | orchestrator |
2025-05-13 23:29:00.975843 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-13 23:29:00.976024 | orchestrator | Tuesday 13 May 2025 23:29:00 +0000 (0:00:00.129) 0:00:12.006 ***********
2025-05-13 23:29:01.084523 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:29:01.085203 | orchestrator |
2025-05-13 23:29:01.086154 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-13 23:29:01.086360 | orchestrator | Tuesday 13 May 2025 23:29:01 +0000 (0:00:00.110) 0:00:12.116 ***********
2025-05-13 23:29:01.202055 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:29:01.202165 | orchestrator |
2025-05-13 23:29:01.202260 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-13 23:29:01.202473 | orchestrator | Tuesday 13 May 2025 23:29:01 +0000 (0:00:00.114) 0:00:12.231 ***********
2025-05-13 23:29:01.327269 | orchestrator | ok: [testbed-node-3] => {
2025-05-13 23:29:01.327396 | orchestrator |     "ceph_osd_devices": {
2025-05-13 23:29:01.327498 | orchestrator |         "sdb": {
2025-05-13 23:29:01.328157 | orchestrator |             "osd_lvm_uuid": "cf553414-fd5b-54a4-812a-8e7012220720"
2025-05-13 23:29:01.328326 | orchestrator |         },
2025-05-13 23:29:01.328740 | orchestrator |         "sdc": {
2025-05-13 23:29:01.329182 | orchestrator |             "osd_lvm_uuid": "9ea6307c-c51b-54ed-aeb4-48fe7d66605c"
2025-05-13 23:29:01.329333 | orchestrator |         }
2025-05-13 23:29:01.329802 | orchestrator |     }
2025-05-13 23:29:01.330385 | orchestrator | }
2025-05-13 23:29:01.330557 | orchestrator |
2025-05-13 23:29:01.331042 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-13 23:29:01.331129 | orchestrator | Tuesday 13 May 2025 23:29:01 +0000 (0:00:00.128) 0:00:12.359 ***********
2025-05-13 23:29:01.449120 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:29:01.449368 | orchestrator |
2025-05-13 23:29:01.449548 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-13 23:29:01.450136 | orchestrator | Tuesday 13 May 2025 23:29:01 +0000 (0:00:00.122) 0:00:12.481 ***********
2025-05-13 23:29:01.584163 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:29:01.585058 | orchestrator |
2025-05-13 23:29:01.586661 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-13 23:29:01.587811 | orchestrator | Tuesday 13 May 2025 23:29:01 +0000 (0:00:00.132) 0:00:12.614 ***********
2025-05-13 23:29:01.701883 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:29:01.703081 | orchestrator |
2025-05-13 23:29:01.703433 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-13 23:29:01.705318 | orchestrator | Tuesday 13 May 2025 23:29:01 +0000 (0:00:00.118) 0:00:12.732 ***********
2025-05-13 23:29:01.914181 | orchestrator | changed: [testbed-node-3] => {
2025-05-13 23:29:01.915829 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-13 23:29:01.917115 | orchestrator |         "ceph_osd_devices": {
2025-05-13 23:29:01.918436 | orchestrator |             "sdb": {
2025-05-13 23:29:01.920627 | orchestrator |                 "osd_lvm_uuid": "cf553414-fd5b-54a4-812a-8e7012220720"
2025-05-13 23:29:01.921344 | orchestrator |             },
2025-05-13 23:29:01.922387 | orchestrator |             "sdc": {
2025-05-13 23:29:01.923223 | orchestrator |                 "osd_lvm_uuid": "9ea6307c-c51b-54ed-aeb4-48fe7d66605c"
2025-05-13 23:29:01.923679 | orchestrator |             }
2025-05-13 23:29:01.924426 | orchestrator |         },
2025-05-13 23:29:01.924908 | orchestrator |         "lvm_volumes": [
2025-05-13 23:29:01.925793 | orchestrator |             {
2025-05-13 23:29:01.926297 | orchestrator |                 "data": "osd-block-cf553414-fd5b-54a4-812a-8e7012220720",
2025-05-13 23:29:01.927288 | orchestrator |                 "data_vg": "ceph-cf553414-fd5b-54a4-812a-8e7012220720"
2025-05-13 23:29:01.927488 | orchestrator |             },
2025-05-13 23:29:01.927937 | orchestrator |             {
2025-05-13 23:29:01.928416 | orchestrator |                 "data": "osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c",
2025-05-13 23:29:01.929119 | orchestrator |                 "data_vg": "ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c"
2025-05-13 23:29:01.929700 | orchestrator |             }
2025-05-13 23:29:01.930661 | orchestrator |         ]
2025-05-13 23:29:01.931260 | orchestrator |     }
2025-05-13 23:29:01.932503 | orchestrator | }
2025-05-13 23:29:01.932802 | orchestrator |
2025-05-13 23:29:01.934449 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-13 23:29:01.935661 | orchestrator | Tuesday 13 May 2025 23:29:01 +0000 (0:00:00.211) 0:00:12.944 ***********
2025-05-13 23:29:03.943376 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-13 23:29:03.943712 | orchestrator |
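With the configuration written back to the manager, a later provisioning step can realize each lvm_volumes entry as actual LVM objects for ceph-volume to consume. A hedged sketch of what that amounts to for the sdb entry of testbed-node-3 (names copied from the output above; the actual provisioning play is outside this log):

    uuid="cf553414-fd5b-54a4-812a-8e7012220720"
    pvcreate /dev/sdb                                            # initialize the disk as an LVM physical volume
    vgcreate "ceph-${uuid}" /dev/sdb                             # VG named after the OSD UUID
    lvcreate -l 100%FREE -n "osd-block-${uuid}" "ceph-${uuid}"   # one LV spanning the whole VG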
2025-05-13 23:29:03.944286 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-13 23:29:03.944558 | orchestrator |
2025-05-13 23:29:03.944818 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-13 23:29:03.945224 | orchestrator | Tuesday 13 May 2025 23:29:03 +0000 (0:00:02.025) 0:00:14.969 ***********
2025-05-13 23:29:04.204433 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-13 23:29:04.204543 | orchestrator |
2025-05-13 23:29:04.204559 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-13 23:29:04.204573 | orchestrator | Tuesday 13 May 2025 23:29:04 +0000 (0:00:00.264) 0:00:15.233 ***********
2025-05-13 23:29:04.438252 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:29:04.438576 | orchestrator |
2025-05-13 23:29:04.439463 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:04.440464 | orchestrator | Tuesday 13 May 2025 23:29:04 +0000 (0:00:00.236) 0:00:15.470 ***********
2025-05-13 23:29:04.851765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-13 23:29:04.851882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-13 23:29:04.851890 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-13 23:29:04.851930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-13 23:29:04.852295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-13 23:29:04.852463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-13 23:29:04.852859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-13 23:29:04.853171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-13 23:29:04.853531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-13 23:29:04.854209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-13 23:29:04.854618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-13 23:29:04.856411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-13 23:29:04.856472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-05-13 23:29:04.856561 | orchestrator |
2025-05-13 23:29:04.856861 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:04.857347 | orchestrator | Tuesday 13 May 2025 23:29:04 +0000 (0:00:00.410) 0:00:15.880 ***********
2025-05-13 23:29:05.036287 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:05.036397 | orchestrator |
2025-05-13 23:29:05.037544 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:05.037582 | orchestrator | Tuesday 13 May 2025 23:29:05 +0000 (0:00:00.183) 0:00:16.063 ***********
2025-05-13 23:29:05.201640 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:05.201806 | orchestrator |
2025-05-13 23:29:05.203809 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:05.204049 | orchestrator | Tuesday 13 May 2025 23:29:05 +0000 (0:00:00.166) 0:00:16.230 ***********
2025-05-13 23:29:05.381913 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:05.384798 | orchestrator |
2025-05-13 23:29:05.385528 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:05.385564 | orchestrator | Tuesday 13 May 2025 23:29:05 +0000 (0:00:00.182) 0:00:16.413 ***********
2025-05-13 23:29:05.576615 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:05.576800 | orchestrator |
2025-05-13 23:29:05.578188 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:05.580946 | orchestrator | Tuesday 13 May 2025 23:29:05 +0000 (0:00:00.194) 0:00:16.607 ***********
2025-05-13 23:29:06.368245 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:06.368514 | orchestrator |
2025-05-13 23:29:06.369934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:06.370445 | orchestrator | Tuesday 13 May 2025 23:29:06 +0000 (0:00:00.790) 0:00:17.398 ***********
2025-05-13 23:29:06.568380 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:06.569737 | orchestrator |
2025-05-13 23:29:06.569820 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:06.570097 | orchestrator | Tuesday 13 May 2025 23:29:06 +0000 (0:00:00.200) 0:00:17.598 ***********
2025-05-13 23:29:06.832367 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:06.832764 | orchestrator |
2025-05-13 23:29:06.835355 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:06.836789 | orchestrator | Tuesday 13 May 2025 23:29:06 +0000 (0:00:00.264) 0:00:17.863 ***********
2025-05-13 23:29:07.117360 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:07.117573 | orchestrator |
2025-05-13 23:29:07.117593 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:07.117848 | orchestrator | Tuesday 13 May 2025 23:29:07 +0000 (0:00:00.284) 0:00:18.148 ***********
2025-05-13 23:29:07.651499 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7)
2025-05-13 23:29:07.652519 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7)
2025-05-13 23:29:07.654845 | orchestrator |
2025-05-13 23:29:07.654956 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:07.658292 | orchestrator | Tuesday 13 May 2025 23:29:07 +0000 (0:00:00.533) 0:00:18.681 ***********
2025-05-13 23:29:08.102684 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05)
2025-05-13 23:29:08.102812 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05)
2025-05-13 23:29:08.104464 | orchestrator |
2025-05-13 23:29:08.105101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:08.105582 | orchestrator | Tuesday 13 May 2025 23:29:08 +0000 (0:00:00.452) 0:00:19.133 ***********
2025-05-13 23:29:08.592164 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648)
2025-05-13 23:29:08.592584 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648)
2025-05-13 23:29:08.592622 | orchestrator |
2025-05-13 23:29:08.592969 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:08.593058 | orchestrator | Tuesday 13 May 2025 23:29:08 +0000 (0:00:00.489) 0:00:19.623 ***********
2025-05-13 23:29:09.020621 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677)
2025-05-13 23:29:09.021351 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677)
2025-05-13 23:29:09.023040 | orchestrator |
2025-05-13 23:29:09.024431 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:09.026394 | orchestrator | Tuesday 13 May 2025 23:29:09 +0000 (0:00:00.427) 0:00:20.051 ***********
2025-05-13 23:29:09.348378 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-13 23:29:09.350695 | orchestrator |
2025-05-13 23:29:09.350734 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:09.351044 | orchestrator | Tuesday 13 May 2025 23:29:09 +0000 (0:00:00.329) 0:00:20.380 ***********
2025-05-13 23:29:09.687666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-05-13 23:29:09.688289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-05-13 23:29:09.692699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-13 23:29:09.693384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-13 23:29:09.694259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-13 23:29:09.695370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-13 23:29:09.696216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-13 23:29:09.696953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-13 23:29:09.698192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-13 23:29:09.698965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-13 23:29:09.700250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-05-13 23:29:09.701415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-13 23:29:09.702150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-13 23:29:09.702826 | orchestrator |
2025-05-13 23:29:09.703626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:09.704117 | orchestrator | Tuesday 13 May 2025 23:29:09 +0000 (0:00:00.337) 0:00:20.718 ***********
2025-05-13 23:29:09.861672 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:09.861860 | orchestrator |
2025-05-13 23:29:09.864297 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:09.865248 | orchestrator | Tuesday 13 May 2025 23:29:09 +0000 (0:00:00.173) 0:00:20.892 ***********
2025-05-13 23:29:10.348948 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:10.349134 | orchestrator |
2025-05-13 23:29:10.349155 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:10.349190 | orchestrator | Tuesday 13 May 2025 23:29:10 +0000 (0:00:00.485) 0:00:21.377 ***********
2025-05-13 23:29:10.531175 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:10.531830 | orchestrator |
2025-05-13 23:29:10.532958 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:10.533770 | orchestrator | Tuesday 13 May 2025 23:29:10 +0000 (0:00:00.184) 0:00:21.562 ***********
2025-05-13 23:29:10.697511 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:10.699315 | orchestrator |
2025-05-13 23:29:10.700137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:10.701322 | orchestrator | Tuesday 13 May 2025 23:29:10 +0000 (0:00:00.165) 0:00:21.727 ***********
2025-05-13 23:29:10.854441 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:10.854926 | orchestrator |
2025-05-13 23:29:10.855180 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:10.855600 | orchestrator | Tuesday 13 May 2025 23:29:10 +0000 (0:00:00.159) 0:00:21.886 ***********
2025-05-13 23:29:11.033430 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:11.033796 | orchestrator |
2025-05-13 23:29:11.034174 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:11.034452 | orchestrator | Tuesday 13 May 2025 23:29:11 +0000 (0:00:00.179) 0:00:22.066 ***********
2025-05-13 23:29:11.215983 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:11.217818 | orchestrator |
2025-05-13 23:29:11.219306 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:11.223141 | orchestrator | Tuesday 13 May 2025 23:29:11 +0000 (0:00:00.178) 0:00:22.244 ***********
2025-05-13 23:29:11.365608 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:11.366345 | orchestrator |
2025-05-13 23:29:11.366688 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:11.369105 | orchestrator | Tuesday 13 May 2025 23:29:11 +0000 (0:00:00.153) 0:00:22.398 ***********
2025-05-13 23:29:11.972839 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-13 23:29:11.973583 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-13 23:29:11.974176 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-13 23:29:11.974898 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-13 23:29:11.975356 | orchestrator |
2025-05-13 23:29:11.976171 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:11.976604 | orchestrator | Tuesday 13 May 2025 23:29:11 +0000 (0:00:00.605) 0:00:23.004 ***********
2025-05-13 23:29:12.166443 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:12.167213 | orchestrator |
2025-05-13 23:29:12.167770 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:12.168857 | orchestrator | Tuesday 13 May 2025 23:29:12 +0000 (0:00:00.191) 0:00:23.195 ***********
2025-05-13 23:29:12.382344 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:12.383563 | orchestrator |
2025-05-13 23:29:12.384879 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:12.386424 | orchestrator | Tuesday 13 May 2025 23:29:12 +0000 (0:00:00.217) 0:00:23.413 ***********
2025-05-13 23:29:12.562379 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:12.562939 | orchestrator |
2025-05-13 23:29:12.565154 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:12.565196 | orchestrator | Tuesday 13 May 2025 23:29:12 +0000 (0:00:00.180) 0:00:23.593 ***********
2025-05-13 23:29:12.756778 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:12.757765 | orchestrator |
2025-05-13 23:29:12.758654 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-13 23:29:12.760010 | orchestrator | Tuesday 13 May 2025 23:29:12 +0000 (0:00:00.189) 0:00:23.783 ***********
2025-05-13 23:29:13.131515 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-05-13 23:29:13.131964 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-05-13 23:29:13.134389 | orchestrator |
2025-05-13 23:29:13.134816 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-13 23:29:13.137063 | orchestrator | Tuesday 13 May 2025 23:29:13 +0000 (0:00:00.375) 0:00:24.158 ***********
2025-05-13 23:29:13.289885 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:13.289988 | orchestrator |
2025-05-13 23:29:13.290366 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-13 23:29:13.290654 | orchestrator | Tuesday 13 May 2025 23:29:13 +0000 (0:00:00.160) 0:00:24.319 ***********
2025-05-13 23:29:13.435895 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:13.439203 | orchestrator |
2025-05-13 23:29:13.439385 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-13 23:29:13.441785 | orchestrator | Tuesday 13 May 2025 23:29:13 +0000 (0:00:00.145) 0:00:24.464 ***********
2025-05-13 23:29:13.578480 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:13.578690 | orchestrator |
2025-05-13 23:29:13.579522 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-13 23:29:13.580366 | orchestrator | Tuesday 13 May 2025 23:29:13 +0000 (0:00:00.145) 0:00:24.609 ***********
2025-05-13 23:29:13.725516 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:29:13.725624 | orchestrator |
2025-05-13 23:29:13.725641 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-13 23:29:13.725767 | orchestrator | Tuesday 13 May 2025 23:29:13 +0000 (0:00:00.141) 0:00:24.751 ***********
2025-05-13 23:29:13.936271 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f56c737-ae06-5042-be62-d4d7430a3913'}})
2025-05-13 23:29:13.937916 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'}})
2025-05-13 23:29:13.938904 | orchestrator |
2025-05-13 23:29:13.939944 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-13 23:29:13.940684 | orchestrator | Tuesday 13 May 2025 23:29:13 +0000 (0:00:00.215) 0:00:24.966 ***********
2025-05-13 23:29:14.081317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f56c737-ae06-5042-be62-d4d7430a3913'}})
2025-05-13 23:29:14.081413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'}})
2025-05-13 23:29:14.083992 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:14.085334 | orchestrator |
2025-05-13 23:29:14.087208 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-13 23:29:14.089106 | orchestrator | Tuesday 13 May 2025 23:29:14 +0000 (0:00:00.143) 0:00:25.110 ***********
2025-05-13 23:29:14.247502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f56c737-ae06-5042-be62-d4d7430a3913'}})
2025-05-13 23:29:14.251022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'}})
2025-05-13 23:29:14.255604 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:14.256223 | orchestrator |
2025-05-13 23:29:14.257197 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-13 23:29:14.258899 | orchestrator | Tuesday 13 May 2025 23:29:14 +0000 (0:00:00.156) 0:00:25.266 ***********
2025-05-13 23:29:14.415129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f56c737-ae06-5042-be62-d4d7430a3913'}})
2025-05-13 23:29:14.417328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'}})
2025-05-13 23:29:14.417640 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:14.417891 | orchestrator |
2025-05-13 23:29:14.418306 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-13 23:29:14.418656 | orchestrator | Tuesday 13 May 2025 23:29:14 +0000 (0:00:00.176) 0:00:25.443 ***********
2025-05-13 23:29:14.568882 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:29:14.570517 | orchestrator |
2025-05-13 23:29:14.570565 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-13 23:29:14.571501 | orchestrator | Tuesday 13 May 2025 23:29:14 +0000 (0:00:00.154) 0:00:25.598 ***********
2025-05-13 23:29:14.721116 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:29:14.722492 | orchestrator |
2025-05-13 23:29:14.725167 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-13 23:29:14.725212 | orchestrator | Tuesday 13 May 2025 23:29:14 +0000 (0:00:00.151) 0:00:25.749 ***********
2025-05-13 23:29:14.848605 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:14.849709 | orchestrator |
2025-05-13 23:29:14.850753 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-13 23:29:14.852191 | orchestrator | Tuesday 13 May 2025 23:29:14 +0000 (0:00:00.128) 0:00:25.878 ***********
2025-05-13 23:29:15.204572 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:15.205635 | orchestrator |
2025-05-13 23:29:15.207298 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-13 23:29:15.208318 | orchestrator | Tuesday 13 May 2025 23:29:15 +0000 (0:00:00.356) 0:00:26.235 ***********
2025-05-13 23:29:15.343416 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:15.344334 | orchestrator |
2025-05-13 23:29:15.345079 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-13 23:29:15.346653 | orchestrator | Tuesday 13 May 2025 23:29:15 +0000 (0:00:00.137) 0:00:26.373 ***********
2025-05-13 23:29:15.520777 | orchestrator | ok: [testbed-node-4] => {
2025-05-13 23:29:15.521953 | orchestrator |     "ceph_osd_devices": {
2025-05-13 23:29:15.523076 | orchestrator |         "sdb": {
2025-05-13 23:29:15.525236 | orchestrator |             "osd_lvm_uuid": "8f56c737-ae06-5042-be62-d4d7430a3913"
2025-05-13 23:29:15.525292 | orchestrator |         },
2025-05-13 23:29:15.526160 | orchestrator |         "sdc": {
2025-05-13 23:29:15.526623 | orchestrator |             "osd_lvm_uuid": "b9ab4848-02bd-5b2a-a6cc-ded55503b6b3"
2025-05-13 23:29:15.527445 | orchestrator |         }
2025-05-13 23:29:15.527507 | orchestrator |     }
2025-05-13 23:29:15.527617 | orchestrator | }
2025-05-13 23:29:15.528168 | orchestrator |
2025-05-13 23:29:15.528518 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-13 23:29:15.529025 | orchestrator | Tuesday 13 May 2025 23:29:15 +0000 (0:00:00.177) 0:00:26.550 ***********
2025-05-13 23:29:15.649530 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:15.649904 | orchestrator |
2025-05-13 23:29:15.651242 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-13 23:29:15.652019 | orchestrator | Tuesday 13 May 2025 23:29:15 +0000 (0:00:00.128) 0:00:26.679 ***********
2025-05-13 23:29:15.805216 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:15.806475 | orchestrator |
2025-05-13 23:29:15.807032 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-13 23:29:15.809514 | orchestrator | Tuesday 13 May 2025 23:29:15 +0000 (0:00:00.154) 0:00:26.834 ***********
2025-05-13 23:29:15.952058 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:29:15.952160 | orchestrator |
2025-05-13 23:29:15.952175 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-13 23:29:15.952186 | orchestrator | Tuesday 13 May 2025 23:29:15 +0000 (0:00:00.146) 0:00:26.980 ***********
2025-05-13 23:29:16.188638 | orchestrator | changed: [testbed-node-4] => {
2025-05-13 23:29:16.191607 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-13 23:29:16.192724 | orchestrator |         "ceph_osd_devices": {
2025-05-13 23:29:16.193694 | orchestrator |             "sdb": {
2025-05-13 23:29:16.195187 | orchestrator |                 "osd_lvm_uuid": "8f56c737-ae06-5042-be62-d4d7430a3913"
2025-05-13 23:29:16.196193 | orchestrator |             },
2025-05-13 23:29:16.197226 | orchestrator |             "sdc": {
2025-05-13 23:29:16.198291 | orchestrator |                 "osd_lvm_uuid": "b9ab4848-02bd-5b2a-a6cc-ded55503b6b3"
2025-05-13 23:29:16.198973 | orchestrator |             }
2025-05-13 23:29:16.200592 | orchestrator |         },
2025-05-13 23:29:16.202197 | orchestrator |         "lvm_volumes": [
2025-05-13 23:29:16.202749 | orchestrator |             {
2025-05-13 23:29:16.206640 | orchestrator |                 "data": "osd-block-8f56c737-ae06-5042-be62-d4d7430a3913",
2025-05-13 23:29:16.209171 | orchestrator |                 "data_vg": "ceph-8f56c737-ae06-5042-be62-d4d7430a3913"
2025-05-13 23:29:16.211229 | orchestrator |             },
2025-05-13 23:29:16.212723 | orchestrator |             {
2025-05-13 23:29:16.214298 | orchestrator |                 "data": "osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3",
2025-05-13 23:29:16.216604 | orchestrator |                 "data_vg": "ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3"
2025-05-13 23:29:16.218209 | orchestrator |             }
2025-05-13 23:29:16.220425 | orchestrator |         ]
2025-05-13 23:29:16.222299 | orchestrator |     }
2025-05-13 23:29:16.223357 | orchestrator | }
2025-05-13 23:29:16.225023 | orchestrator |
2025-05-13 23:29:16.226567 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-13 23:29:16.227791 | orchestrator | Tuesday 13 May 2025 23:29:16 +0000 (0:00:00.239) 0:00:27.220 ***********
2025-05-13 23:29:17.405666 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-13 23:29:17.408332 | orchestrator |
2025-05-13 23:29:17.408458 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-13 23:29:17.408907 | orchestrator |
2025-05-13 23:29:17.410759 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-13 23:29:17.412364 | orchestrator | Tuesday 13 May 2025 23:29:17 +0000 (0:00:01.216) 0:00:28.436 ***********
2025-05-13 23:29:17.896284 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-13 23:29:17.896404 | orchestrator |
2025-05-13 23:29:17.898201 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-13 23:29:17.898527 | orchestrator | Tuesday 13 May 2025 23:29:17 +0000 (0:00:00.487) 0:00:28.924 ***********
2025-05-13 23:29:18.634272 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:29:18.634580 | orchestrator |
2025-05-13 23:29:18.636870 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:18.637588 | orchestrator | Tuesday 13 May 2025 23:29:18 +0000 (0:00:00.739) 0:00:29.664 ***********
2025-05-13 23:29:19.049141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-13 23:29:19.049297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-13 23:29:19.053554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-13 23:29:19.053611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-13 23:29:19.054564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-13 23:29:19.055832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-13 23:29:19.056484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-13 23:29:19.057091 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-13 23:29:19.057472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-13 23:29:19.058581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-13 23:29:19.059204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-13 23:29:19.059479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-13 23:29:19.060255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-13 23:29:19.061475 | orchestrator |
2025-05-13 23:29:19.064209 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:19.064656 | orchestrator | Tuesday 13 May 2025 23:29:19 +0000 (0:00:00.414) 0:00:30.079 ***********
2025-05-13 23:29:19.264739 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:29:19.265114 | orchestrator |
2025-05-13 23:29:19.267419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:19.267662 | orchestrator | Tuesday 13 May 2025 23:29:19 +0000 (0:00:00.217) 0:00:30.296 ***********
2025-05-13 23:29:19.463614 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:29:19.463777 | orchestrator |
2025-05-13 23:29:19.464010 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:19.464309 | orchestrator | Tuesday 13 May 2025 23:29:19 +0000 (0:00:00.199) 0:00:30.495 ***********
2025-05-13 23:29:19.648312 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:29:19.651938 | orchestrator |
2025-05-13 23:29:19.655531 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:19.655784 | orchestrator | Tuesday 13 May 2025 23:29:19 +0000 (0:00:00.182) 0:00:30.678 ***********
2025-05-13 23:29:19.856255 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:29:19.856462 | orchestrator |
2025-05-13 23:29:19.856514 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:19.856931 | orchestrator | Tuesday 13 May 2025 23:29:19 +0000 (0:00:00.204) 0:00:30.883 ***********
2025-05-13 23:29:20.043352 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:29:20.043438 | orchestrator |
2025-05-13 23:29:20.043497 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:20.044716 | orchestrator | Tuesday 13 May 2025 23:29:20 +0000 (0:00:00.191) 0:00:31.075 ***********
2025-05-13 23:29:20.240796 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:29:20.240960 | orchestrator |
2025-05-13 23:29:20.241131 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:20.241813 | orchestrator | Tuesday 13 May 2025 23:29:20 +0000 (0:00:00.195) 0:00:31.270 ***********
2025-05-13 23:29:20.441761 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:29:20.441994 | orchestrator |
2025-05-13 23:29:20.443625 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:20.443703 | orchestrator | Tuesday 13 May 2025 23:29:20 +0000 (0:00:00.198) 0:00:31.469 ***********
2025-05-13 23:29:20.628181 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:29:20.631179 | orchestrator |
2025-05-13 23:29:20.631553 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:20.631786 | orchestrator | Tuesday 13 May 2025 23:29:20 +0000 (0:00:00.191) 0:00:31.660 ***********
2025-05-13 23:29:21.190282 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78)
2025-05-13 23:29:21.190974 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78)
2025-05-13 23:29:21.191145 | orchestrator |
2025-05-13 23:29:21.191472 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:21.191796 | orchestrator | Tuesday 13 May 2025 23:29:21 +0000 (0:00:00.557) 0:00:32.217 ***********
2025-05-13 23:29:21.910419 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8)
2025-05-13 23:29:21.910759 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8)
2025-05-13 23:29:21.911315 | orchestrator |
2025-05-13 23:29:21.912515 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:21.914625 | orchestrator | Tuesday 13 May 2025 23:29:21 +0000 (0:00:00.724) 0:00:32.941 ***********
2025-05-13 23:29:22.340287 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e)
2025-05-13 23:29:22.340454 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e)
2025-05-13 23:29:22.340698 | orchestrator |
2025-05-13 23:29:22.341248 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:22.342208 | orchestrator | Tuesday 13 May 2025 23:29:22 +0000 (0:00:00.426) 0:00:33.368 ***********
2025-05-13 23:29:22.716671 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3)
2025-05-13 23:29:22.717779 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3)
2025-05-13 23:29:22.718672 | orchestrator |
2025-05-13 23:29:22.719481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:29:22.720466 | orchestrator | Tuesday 13 May 2025 23:29:22 +0000 (0:00:00.379) 0:00:33.747 ***********
2025-05-13 23:29:23.021124 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-13 23:29:23.021586 | orchestrator |
2025-05-13 23:29:23.022530 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-13 23:29:23.024055 | orchestrator | Tuesday 13 May 2025 23:29:23 +0000 (0:00:00.302) 0:00:34.050 ***********
2025-05-13 23:29:23.378524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-05-13 23:29:23.379202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-05-13 23:29:23.381973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-05-13 23:29:23.383446 | orchestrator
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-13 23:29:23.383679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-13 23:29:23.384144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-13 23:29:23.384583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-13 23:29:23.384993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-13 23:29:23.385681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-13 23:29:23.385969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-13 23:29:23.386452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-13 23:29:23.387202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-13 23:29:23.387592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-13 23:29:23.388233 | orchestrator | 2025-05-13 23:29:23.388674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:23.389239 | orchestrator | Tuesday 13 May 2025 23:29:23 +0000 (0:00:00.358) 0:00:34.408 *********** 2025-05-13 23:29:23.562662 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:23.562990 | orchestrator | 2025-05-13 23:29:23.563981 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:23.564576 | orchestrator | Tuesday 13 May 2025 23:29:23 +0000 (0:00:00.182) 0:00:34.591 *********** 2025-05-13 23:29:23.731261 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:23.732166 | orchestrator | 2025-05-13 23:29:23.733335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:23.734151 | orchestrator | Tuesday 13 May 2025 23:29:23 +0000 (0:00:00.170) 0:00:34.761 *********** 2025-05-13 23:29:23.903371 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:23.903450 | orchestrator | 2025-05-13 23:29:23.903459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:23.903808 | orchestrator | Tuesday 13 May 2025 23:29:23 +0000 (0:00:00.169) 0:00:34.931 *********** 2025-05-13 23:29:24.089968 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:24.091007 | orchestrator | 2025-05-13 23:29:24.091593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:24.092295 | orchestrator | Tuesday 13 May 2025 23:29:24 +0000 (0:00:00.188) 0:00:35.119 *********** 2025-05-13 23:29:24.276456 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:24.277870 | orchestrator | 2025-05-13 23:29:24.279188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:24.281891 | orchestrator | Tuesday 13 May 2025 23:29:24 +0000 (0:00:00.188) 0:00:35.308 *********** 2025-05-13 23:29:24.728678 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:24.729984 | orchestrator | 2025-05-13 23:29:24.731232 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-05-13 23:29:24.732166 | orchestrator | Tuesday 13 May 2025 23:29:24 +0000 (0:00:00.451) 0:00:35.759 *********** 2025-05-13 23:29:24.901456 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:24.902190 | orchestrator | 2025-05-13 23:29:24.903389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:24.903993 | orchestrator | Tuesday 13 May 2025 23:29:24 +0000 (0:00:00.172) 0:00:35.931 *********** 2025-05-13 23:29:25.089732 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:25.091198 | orchestrator | 2025-05-13 23:29:25.092006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:25.093544 | orchestrator | Tuesday 13 May 2025 23:29:25 +0000 (0:00:00.189) 0:00:36.121 *********** 2025-05-13 23:29:25.743596 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-13 23:29:25.745594 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-13 23:29:25.747513 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-13 23:29:25.748332 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-13 23:29:25.749152 | orchestrator | 2025-05-13 23:29:25.749731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:25.751472 | orchestrator | Tuesday 13 May 2025 23:29:25 +0000 (0:00:00.651) 0:00:36.772 *********** 2025-05-13 23:29:25.935272 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:25.936556 | orchestrator | 2025-05-13 23:29:25.937925 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:25.940869 | orchestrator | Tuesday 13 May 2025 23:29:25 +0000 (0:00:00.193) 0:00:36.966 *********** 2025-05-13 23:29:26.117893 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:26.121260 | orchestrator | 2025-05-13 23:29:26.122341 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:26.122882 | orchestrator | Tuesday 13 May 2025 23:29:26 +0000 (0:00:00.182) 0:00:37.148 *********** 2025-05-13 23:29:26.354010 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:26.355116 | orchestrator | 2025-05-13 23:29:26.356251 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:29:26.357472 | orchestrator | Tuesday 13 May 2025 23:29:26 +0000 (0:00:00.235) 0:00:37.384 *********** 2025-05-13 23:29:26.550386 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:26.553889 | orchestrator | 2025-05-13 23:29:26.555145 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-13 23:29:26.555442 | orchestrator | Tuesday 13 May 2025 23:29:26 +0000 (0:00:00.194) 0:00:37.578 *********** 2025-05-13 23:29:26.713663 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-13 23:29:26.714725 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-13 23:29:26.716022 | orchestrator | 2025-05-13 23:29:26.719925 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-13 23:29:26.720958 | orchestrator | Tuesday 13 May 2025 23:29:26 +0000 (0:00:00.165) 0:00:37.744 *********** 2025-05-13 23:29:26.856437 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:26.862738 | orchestrator | 2025-05-13 23:29:26.863376 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-05-13 23:29:26.865883 | orchestrator | Tuesday 13 May 2025 23:29:26 +0000 (0:00:00.139) 0:00:37.883 *********** 2025-05-13 23:29:26.995857 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:26.997352 | orchestrator | 2025-05-13 23:29:27.002467 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-13 23:29:27.003353 | orchestrator | Tuesday 13 May 2025 23:29:26 +0000 (0:00:00.142) 0:00:38.026 *********** 2025-05-13 23:29:27.134237 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:27.135185 | orchestrator | 2025-05-13 23:29:27.135457 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-13 23:29:27.136459 | orchestrator | Tuesday 13 May 2025 23:29:27 +0000 (0:00:00.137) 0:00:38.164 *********** 2025-05-13 23:29:27.527227 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:29:27.529236 | orchestrator | 2025-05-13 23:29:27.533547 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-13 23:29:27.534725 | orchestrator | Tuesday 13 May 2025 23:29:27 +0000 (0:00:00.393) 0:00:38.557 *********** 2025-05-13 23:29:27.717922 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53cfcf66-6862-5829-a71b-dc902cfbd9df'}}) 2025-05-13 23:29:27.719312 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd153f4c4-5597-54b4-b460-41e490b92c19'}}) 2025-05-13 23:29:27.720941 | orchestrator | 2025-05-13 23:29:27.724193 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-13 23:29:27.727341 | orchestrator | Tuesday 13 May 2025 23:29:27 +0000 (0:00:00.191) 0:00:38.749 *********** 2025-05-13 23:29:27.868630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53cfcf66-6862-5829-a71b-dc902cfbd9df'}})  2025-05-13 23:29:27.869502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd153f4c4-5597-54b4-b460-41e490b92c19'}})  2025-05-13 23:29:27.874663 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:27.875434 | orchestrator | 2025-05-13 23:29:27.879360 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-13 23:29:27.879617 | orchestrator | Tuesday 13 May 2025 23:29:27 +0000 (0:00:00.149) 0:00:38.899 *********** 2025-05-13 23:29:28.026936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53cfcf66-6862-5829-a71b-dc902cfbd9df'}})  2025-05-13 23:29:28.028526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd153f4c4-5597-54b4-b460-41e490b92c19'}})  2025-05-13 23:29:28.030164 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:28.031709 | orchestrator | 2025-05-13 23:29:28.032828 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-13 23:29:28.037205 | orchestrator | Tuesday 13 May 2025 23:29:28 +0000 (0:00:00.156) 0:00:39.055 *********** 2025-05-13 23:29:28.186559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53cfcf66-6862-5829-a71b-dc902cfbd9df'}})  2025-05-13 23:29:28.186686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd153f4c4-5597-54b4-b460-41e490b92c19'}})  2025-05-13 
23:29:28.187820 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:28.188944 | orchestrator | 2025-05-13 23:29:28.189990 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-13 23:29:28.191673 | orchestrator | Tuesday 13 May 2025 23:29:28 +0000 (0:00:00.159) 0:00:39.214 *********** 2025-05-13 23:29:28.364982 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:29:28.365065 | orchestrator | 2025-05-13 23:29:28.366442 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-13 23:29:28.367300 | orchestrator | Tuesday 13 May 2025 23:29:28 +0000 (0:00:00.180) 0:00:39.395 *********** 2025-05-13 23:29:28.498619 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:29:28.498882 | orchestrator | 2025-05-13 23:29:28.500280 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-13 23:29:28.501003 | orchestrator | Tuesday 13 May 2025 23:29:28 +0000 (0:00:00.133) 0:00:39.528 *********** 2025-05-13 23:29:28.633342 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:28.633467 | orchestrator | 2025-05-13 23:29:28.634397 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-13 23:29:28.635350 | orchestrator | Tuesday 13 May 2025 23:29:28 +0000 (0:00:00.134) 0:00:39.663 *********** 2025-05-13 23:29:28.778318 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:28.778533 | orchestrator | 2025-05-13 23:29:28.779219 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-13 23:29:28.780675 | orchestrator | Tuesday 13 May 2025 23:29:28 +0000 (0:00:00.143) 0:00:39.807 *********** 2025-05-13 23:29:28.930682 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:28.930789 | orchestrator | 2025-05-13 23:29:28.932237 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-13 23:29:28.933408 | orchestrator | Tuesday 13 May 2025 23:29:28 +0000 (0:00:00.149) 0:00:39.956 *********** 2025-05-13 23:29:29.078402 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 23:29:29.079319 | orchestrator |  "ceph_osd_devices": { 2025-05-13 23:29:29.080378 | orchestrator |  "sdb": { 2025-05-13 23:29:29.081064 | orchestrator |  "osd_lvm_uuid": "53cfcf66-6862-5829-a71b-dc902cfbd9df" 2025-05-13 23:29:29.084167 | orchestrator |  }, 2025-05-13 23:29:29.086385 | orchestrator |  "sdc": { 2025-05-13 23:29:29.090442 | orchestrator |  "osd_lvm_uuid": "d153f4c4-5597-54b4-b460-41e490b92c19" 2025-05-13 23:29:29.090494 | orchestrator |  } 2025-05-13 23:29:29.091937 | orchestrator |  } 2025-05-13 23:29:29.093287 | orchestrator | } 2025-05-13 23:29:29.094214 | orchestrator | 2025-05-13 23:29:29.095514 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-13 23:29:29.096308 | orchestrator | Tuesday 13 May 2025 23:29:29 +0000 (0:00:00.151) 0:00:40.108 *********** 2025-05-13 23:29:29.210448 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:29.213380 | orchestrator | 2025-05-13 23:29:29.215447 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-13 23:29:29.217140 | orchestrator | Tuesday 13 May 2025 23:29:29 +0000 (0:00:00.129) 0:00:40.238 *********** 2025-05-13 23:29:29.557957 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:29.562625 | orchestrator | 2025-05-13 23:29:29.562703 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-05-13 23:29:29.562718 | orchestrator | Tuesday 13 May 2025 23:29:29 +0000 (0:00:00.348) 0:00:40.586 *********** 2025-05-13 23:29:29.713047 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:29:29.713165 | orchestrator | 2025-05-13 23:29:29.713179 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-13 23:29:29.713189 | orchestrator | Tuesday 13 May 2025 23:29:29 +0000 (0:00:00.155) 0:00:40.742 *********** 2025-05-13 23:29:29.963046 | orchestrator | changed: [testbed-node-5] => { 2025-05-13 23:29:29.963226 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-13 23:29:29.963868 | orchestrator |  "ceph_osd_devices": { 2025-05-13 23:29:29.965567 | orchestrator |  "sdb": { 2025-05-13 23:29:29.966352 | orchestrator |  "osd_lvm_uuid": "53cfcf66-6862-5829-a71b-dc902cfbd9df" 2025-05-13 23:29:29.967087 | orchestrator |  }, 2025-05-13 23:29:29.967423 | orchestrator |  "sdc": { 2025-05-13 23:29:29.968287 | orchestrator |  "osd_lvm_uuid": "d153f4c4-5597-54b4-b460-41e490b92c19" 2025-05-13 23:29:29.968615 | orchestrator |  } 2025-05-13 23:29:29.971180 | orchestrator |  }, 2025-05-13 23:29:29.971217 | orchestrator |  "lvm_volumes": [ 2025-05-13 23:29:29.971232 | orchestrator |  { 2025-05-13 23:29:29.971243 | orchestrator |  "data": "osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df", 2025-05-13 23:29:29.971255 | orchestrator |  "data_vg": "ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df" 2025-05-13 23:29:29.971267 | orchestrator |  }, 2025-05-13 23:29:29.971278 | orchestrator |  { 2025-05-13 23:29:29.971296 | orchestrator |  "data": "osd-block-d153f4c4-5597-54b4-b460-41e490b92c19", 2025-05-13 23:29:29.971398 | orchestrator |  "data_vg": "ceph-d153f4c4-5597-54b4-b460-41e490b92c19" 2025-05-13 23:29:29.971856 | orchestrator |  } 2025-05-13 23:29:29.972341 | orchestrator |  ] 2025-05-13 23:29:29.972743 | orchestrator |  } 2025-05-13 23:29:29.973183 | orchestrator | } 2025-05-13 23:29:29.973482 | orchestrator | 2025-05-13 23:29:29.973990 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-13 23:29:29.974411 | orchestrator | Tuesday 13 May 2025 23:29:29 +0000 (0:00:00.251) 0:00:40.993 *********** 2025-05-13 23:29:30.898757 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-13 23:29:30.898979 | orchestrator | 2025-05-13 23:29:30.901597 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:29:30.903209 | orchestrator | 2025-05-13 23:29:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 23:29:30.903239 | orchestrator | 2025-05-13 23:29:30 | INFO  | Please wait and do not abort execution. 
2025-05-13 23:29:30.904631 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-13 23:29:30.906517 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-13 23:29:30.908460 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-13 23:29:30.909697 | orchestrator | 2025-05-13 23:29:30.910633 | orchestrator | 2025-05-13 23:29:30.911435 | orchestrator | 2025-05-13 23:29:30.912759 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:29:30.913389 | orchestrator | Tuesday 13 May 2025 23:29:30 +0000 (0:00:00.935) 0:00:41.929 *********** 2025-05-13 23:29:30.915800 | orchestrator | =============================================================================== 2025-05-13 23:29:30.917509 | orchestrator | Write configuration file ------------------------------------------------ 4.18s 2025-05-13 23:29:30.917979 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s 2025-05-13 23:29:30.919149 | orchestrator | Get initial list of available block devices ----------------------------- 1.20s 2025-05-13 23:29:30.919617 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s 2025-05-13 23:29:30.920734 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s 2025-05-13 23:29:30.921356 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.00s 2025-05-13 23:29:30.922340 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2025-05-13 23:29:30.923092 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2025-05-13 23:29:30.924455 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2025-05-13 23:29:30.924964 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.71s 2025-05-13 23:29:30.927179 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.71s 2025-05-13 23:29:30.928331 | orchestrator | Print configuration data ------------------------------------------------ 0.70s 2025-05-13 23:29:30.928652 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-05-13 23:29:30.929563 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-05-13 23:29:30.930666 | orchestrator | Print DB devices -------------------------------------------------------- 0.64s 2025-05-13 23:29:30.931060 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-05-13 23:29:30.931825 | orchestrator | Set WAL devices config data --------------------------------------------- 0.61s 2025-05-13 23:29:30.932454 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2025-05-13 23:29:30.932775 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.60s 2025-05-13 23:29:30.934066 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.60s 2025-05-13 23:29:43.312777 | orchestrator | 2025-05-13 23:29:43 | INFO  | Task edcf7841-18bd-4019-89d1-842f41757178 (sync inventory) is running in background. 
Output coming soon. 2025-05-13 23:30:28.638530 | orchestrator | 2025-05-13 23:30:15 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-05-13 23:30:28.638651 | orchestrator | 2025-05-13 23:30:15 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-05-13 23:30:28.638666 | orchestrator | 2025-05-13 23:30:15 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-05-13 23:30:28.638678 | orchestrator | 2025-05-13 23:30:16 | INFO  | Handling group overwrites in 99-overwrite 2025-05-13 23:30:28.638691 | orchestrator | 2025-05-13 23:30:16 | INFO  | Removing group frr:children from 60-generic 2025-05-13 23:30:28.638702 | orchestrator | 2025-05-13 23:30:16 | INFO  | Removing group storage:children from 50-kolla 2025-05-13 23:30:28.638712 | orchestrator | 2025-05-13 23:30:16 | INFO  | Removing group netbird:children from 50-infrastruture 2025-05-13 23:30:28.638723 | orchestrator | 2025-05-13 23:30:16 | INFO  | Removing group ceph-mds from 50-ceph 2025-05-13 23:30:28.638734 | orchestrator | 2025-05-13 23:30:16 | INFO  | Removing group ceph-rgw from 50-ceph 2025-05-13 23:30:28.638745 | orchestrator | 2025-05-13 23:30:16 | INFO  | Handling group overwrites in 20-roles 2025-05-13 23:30:28.638755 | orchestrator | 2025-05-13 23:30:16 | INFO  | Removing group k3s_node from 50-infrastruture 2025-05-13 23:30:28.638766 | orchestrator | 2025-05-13 23:30:17 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-05-13 23:30:28.638801 | orchestrator | 2025-05-13 23:30:28 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-05-13 23:30:30.510313 | orchestrator | 2025-05-13 23:30:30 | INFO  | Task 1b06dd81-9ee7-43a1-9db0-405b43846d25 (ceph-create-lvm-devices) was prepared for execution. 2025-05-13 23:30:30.510605 | orchestrator | 2025-05-13 23:30:30 | INFO  | It takes a moment until task 1b06dd81-9ee7-43a1-9db0-405b43846d25 (ceph-create-lvm-devices) has been started and output is visible here. 
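The configuration data printed above pairs each OSD candidate device with a name-based (version 5) UUID, so repeated runs derive the same VG/LV names, and the "Write configuration file" handler persists the result via testbed-manager. A minimal YAML sketch of that persisted structure, reconstructed from the printed _ceph_configure_lvm_config_data; the file location is an assumption, while the keys, values, and the ceph-<uuid>/osd-block-<uuid> naming scheme come straight from the log:

# Sketch of the per-host data written by "Write configuration file".
# The path is hypothetical (e.g. host_vars/testbed-node-5/...); the
# content mirrors the structure printed above for testbed-node-5.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 53cfcf66-6862-5829-a71b-dc902cfbd9df
  sdc:
    osd_lvm_uuid: d153f4c4-5597-54b4-b460-41e490b92c19
lvm_volumes:
  - data: osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df    # LV name
    data_vg: ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df      # VG name
  - data: osd-block-d153f4c4-5597-54b4-b460-41e490b92c19
    data_vg: ceph-d153f4c4-5597-54b4-b460-41e490b92c19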
2025-05-13 23:30:34.703555 | orchestrator | 2025-05-13 23:30:34.703791 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-13 23:30:34.705737 | orchestrator | 2025-05-13 23:30:34.705922 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-13 23:30:34.706800 | orchestrator | Tuesday 13 May 2025 23:30:34 +0000 (0:00:00.329) 0:00:00.329 *********** 2025-05-13 23:30:34.921100 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 23:30:34.921231 | orchestrator | 2025-05-13 23:30:34.921571 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-13 23:30:34.922137 | orchestrator | Tuesday 13 May 2025 23:30:34 +0000 (0:00:00.220) 0:00:00.549 *********** 2025-05-13 23:30:35.150624 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:30:35.150776 | orchestrator | 2025-05-13 23:30:35.151611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:35.151914 | orchestrator | Tuesday 13 May 2025 23:30:35 +0000 (0:00:00.230) 0:00:00.780 *********** 2025-05-13 23:30:35.576697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-13 23:30:35.578147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-13 23:30:35.581169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-13 23:30:35.581197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-13 23:30:35.581202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-13 23:30:35.581674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-13 23:30:35.582193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-13 23:30:35.582944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-13 23:30:35.583794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-13 23:30:35.584121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-13 23:30:35.585051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-13 23:30:35.585703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-13 23:30:35.585991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-13 23:30:35.586694 | orchestrator | 2025-05-13 23:30:35.587293 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:35.587599 | orchestrator | Tuesday 13 May 2025 23:30:35 +0000 (0:00:00.424) 0:00:01.205 *********** 2025-05-13 23:30:36.036647 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:36.036888 | orchestrator | 2025-05-13 23:30:36.038446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:36.039738 | orchestrator | Tuesday 13 May 2025 23:30:36 +0000 (0:00:00.458) 0:00:01.664 *********** 2025-05-13 23:30:36.258131 | orchestrator | skipping: [testbed-node-3] 2025-05-13 
23:30:36.259004 | orchestrator | 2025-05-13 23:30:36.259567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:36.260309 | orchestrator | Tuesday 13 May 2025 23:30:36 +0000 (0:00:00.223) 0:00:01.887 *********** 2025-05-13 23:30:36.464131 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:36.465159 | orchestrator | 2025-05-13 23:30:36.467162 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:36.468245 | orchestrator | Tuesday 13 May 2025 23:30:36 +0000 (0:00:00.205) 0:00:02.093 *********** 2025-05-13 23:30:36.648427 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:36.649224 | orchestrator | 2025-05-13 23:30:36.650290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:36.651898 | orchestrator | Tuesday 13 May 2025 23:30:36 +0000 (0:00:00.184) 0:00:02.277 *********** 2025-05-13 23:30:36.857261 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:36.858216 | orchestrator | 2025-05-13 23:30:36.859541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:36.859563 | orchestrator | Tuesday 13 May 2025 23:30:36 +0000 (0:00:00.207) 0:00:02.484 *********** 2025-05-13 23:30:37.076310 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:37.076467 | orchestrator | 2025-05-13 23:30:37.078263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:37.079981 | orchestrator | Tuesday 13 May 2025 23:30:37 +0000 (0:00:00.220) 0:00:02.705 *********** 2025-05-13 23:30:37.263797 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:37.263970 | orchestrator | 2025-05-13 23:30:37.264937 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:37.266419 | orchestrator | Tuesday 13 May 2025 23:30:37 +0000 (0:00:00.187) 0:00:02.892 *********** 2025-05-13 23:30:37.472830 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:37.473625 | orchestrator | 2025-05-13 23:30:37.475088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:37.477142 | orchestrator | Tuesday 13 May 2025 23:30:37 +0000 (0:00:00.208) 0:00:03.101 *********** 2025-05-13 23:30:37.890718 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec) 2025-05-13 23:30:37.891323 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec) 2025-05-13 23:30:37.892633 | orchestrator | 2025-05-13 23:30:37.892721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:37.893317 | orchestrator | Tuesday 13 May 2025 23:30:37 +0000 (0:00:00.417) 0:00:03.518 *********** 2025-05-13 23:30:38.321229 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45) 2025-05-13 23:30:38.322178 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45) 2025-05-13 23:30:38.323315 | orchestrator | 2025-05-13 23:30:38.324874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:38.325040 | orchestrator | Tuesday 13 May 2025 23:30:38 +0000 (0:00:00.430) 0:00:03.949 *********** 2025-05-13 
23:30:38.959148 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000) 2025-05-13 23:30:38.959948 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000) 2025-05-13 23:30:38.961170 | orchestrator | 2025-05-13 23:30:38.961205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:38.961946 | orchestrator | Tuesday 13 May 2025 23:30:38 +0000 (0:00:00.637) 0:00:04.586 *********** 2025-05-13 23:30:39.819215 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438) 2025-05-13 23:30:39.819389 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438) 2025-05-13 23:30:39.819408 | orchestrator | 2025-05-13 23:30:39.819855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:39.820229 | orchestrator | Tuesday 13 May 2025 23:30:39 +0000 (0:00:00.860) 0:00:05.446 *********** 2025-05-13 23:30:40.152925 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-13 23:30:40.154053 | orchestrator | 2025-05-13 23:30:40.155225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:40.155946 | orchestrator | Tuesday 13 May 2025 23:30:40 +0000 (0:00:00.335) 0:00:05.782 *********** 2025-05-13 23:30:40.564585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-13 23:30:40.565261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-13 23:30:40.566131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-13 23:30:40.569445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-13 23:30:40.569754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-13 23:30:40.570678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-13 23:30:40.571327 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-13 23:30:40.571974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-13 23:30:40.572864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-13 23:30:40.573866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-13 23:30:40.574454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-13 23:30:40.575008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-13 23:30:40.575692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-13 23:30:40.576152 | orchestrator | 2025-05-13 23:30:40.576748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:40.577282 | orchestrator | Tuesday 13 May 2025 23:30:40 +0000 (0:00:00.409) 0:00:06.192 *********** 2025-05-13 23:30:40.775731 | orchestrator | skipping: [testbed-node-3] 
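Each include of _add-device-links.yml above folds a device's /dev/disk/by-id symlinks (the scsi-0QEMU_.../scsi-SQEMU_... and ata-QEMU_DVD-ROM_... items) into the list of available block devices; loop devices skip because they have no such links. A minimal sketch of how such an include can work, assuming the list variable name; the link data itself comes from Ansible's standard ansible_devices facts:

# Hypothetical _add-device-links.yml body; "available_block_devices"
# is an assumed variable name. ansible_devices[item]['links']['ids']
# holds the /dev/disk/by-id symlink names seen in the log output.
- name: Add known links to the list of available block devices
  ansible.builtin.set_fact:
    available_block_devices: >-
      {{ available_block_devices | default([])
         + ansible_devices[item]['links']['ids'] }}
  when: ansible_devices[item]['links']['ids'] | length > 0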
2025-05-13 23:30:40.778684 | orchestrator | 2025-05-13 23:30:40.778719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:40.779040 | orchestrator | Tuesday 13 May 2025 23:30:40 +0000 (0:00:00.211) 0:00:06.403 *********** 2025-05-13 23:30:40.969940 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:40.971732 | orchestrator | 2025-05-13 23:30:40.972153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:40.973292 | orchestrator | Tuesday 13 May 2025 23:30:40 +0000 (0:00:00.193) 0:00:06.597 *********** 2025-05-13 23:30:41.166852 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:41.169078 | orchestrator | 2025-05-13 23:30:41.170305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:41.170341 | orchestrator | Tuesday 13 May 2025 23:30:41 +0000 (0:00:00.198) 0:00:06.795 *********** 2025-05-13 23:30:41.367647 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:41.368033 | orchestrator | 2025-05-13 23:30:41.369038 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:41.369627 | orchestrator | Tuesday 13 May 2025 23:30:41 +0000 (0:00:00.201) 0:00:06.996 *********** 2025-05-13 23:30:41.574001 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:41.574858 | orchestrator | 2025-05-13 23:30:41.575739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:41.577167 | orchestrator | Tuesday 13 May 2025 23:30:41 +0000 (0:00:00.206) 0:00:07.202 *********** 2025-05-13 23:30:41.775337 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:41.775523 | orchestrator | 2025-05-13 23:30:41.776950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:41.777831 | orchestrator | Tuesday 13 May 2025 23:30:41 +0000 (0:00:00.201) 0:00:07.404 *********** 2025-05-13 23:30:41.965459 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:41.965684 | orchestrator | 2025-05-13 23:30:41.967920 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:41.969156 | orchestrator | Tuesday 13 May 2025 23:30:41 +0000 (0:00:00.188) 0:00:07.593 *********** 2025-05-13 23:30:42.171100 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:42.171204 | orchestrator | 2025-05-13 23:30:42.171429 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:42.173276 | orchestrator | Tuesday 13 May 2025 23:30:42 +0000 (0:00:00.205) 0:00:07.798 *********** 2025-05-13 23:30:43.238704 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-13 23:30:43.238928 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-13 23:30:43.239645 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-13 23:30:43.241636 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-13 23:30:43.241679 | orchestrator | 2025-05-13 23:30:43.241693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:43.241810 | orchestrator | Tuesday 13 May 2025 23:30:43 +0000 (0:00:01.068) 0:00:08.866 *********** 2025-05-13 23:30:43.436610 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:43.437114 | orchestrator | 2025-05-13 23:30:43.437405 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:43.437887 | orchestrator | Tuesday 13 May 2025 23:30:43 +0000 (0:00:00.199) 0:00:09.065 *********** 2025-05-13 23:30:43.640866 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:43.640974 | orchestrator | 2025-05-13 23:30:43.643445 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:43.643536 | orchestrator | Tuesday 13 May 2025 23:30:43 +0000 (0:00:00.201) 0:00:09.267 *********** 2025-05-13 23:30:43.838324 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:43.838977 | orchestrator | 2025-05-13 23:30:43.841010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:30:43.842194 | orchestrator | Tuesday 13 May 2025 23:30:43 +0000 (0:00:00.198) 0:00:09.466 *********** 2025-05-13 23:30:44.038502 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:44.038910 | orchestrator | 2025-05-13 23:30:44.039765 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-13 23:30:44.040670 | orchestrator | Tuesday 13 May 2025 23:30:44 +0000 (0:00:00.200) 0:00:09.667 *********** 2025-05-13 23:30:44.172893 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:44.173150 | orchestrator | 2025-05-13 23:30:44.174116 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-13 23:30:44.175926 | orchestrator | Tuesday 13 May 2025 23:30:44 +0000 (0:00:00.134) 0:00:09.801 *********** 2025-05-13 23:30:44.397834 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cf553414-fd5b-54a4-812a-8e7012220720'}}) 2025-05-13 23:30:44.399296 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9ea6307c-c51b-54ed-aeb4-48fe7d66605c'}}) 2025-05-13 23:30:44.401184 | orchestrator | 2025-05-13 23:30:44.401211 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-13 23:30:44.401913 | orchestrator | Tuesday 13 May 2025 23:30:44 +0000 (0:00:00.224) 0:00:10.025 *********** 2025-05-13 23:30:46.354655 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'}) 2025-05-13 23:30:46.354775 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'}) 2025-05-13 23:30:46.354850 | orchestrator | 2025-05-13 23:30:46.355632 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-13 23:30:46.356005 | orchestrator | Tuesday 13 May 2025 23:30:46 +0000 (0:00:01.955) 0:00:11.981 *********** 2025-05-13 23:30:46.507103 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:46.509418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:46.510192 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:46.511151 | orchestrator | 2025-05-13 23:30:46.511850 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-13 
23:30:46.512500 | orchestrator | Tuesday 13 May 2025 23:30:46 +0000 (0:00:00.153) 0:00:12.135 *********** 2025-05-13 23:30:47.923304 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'}) 2025-05-13 23:30:47.923445 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'}) 2025-05-13 23:30:47.924311 | orchestrator | 2025-05-13 23:30:47.925760 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-13 23:30:47.926452 | orchestrator | Tuesday 13 May 2025 23:30:47 +0000 (0:00:01.415) 0:00:13.551 *********** 2025-05-13 23:30:48.081045 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:48.081116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:48.081432 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:48.082041 | orchestrator | 2025-05-13 23:30:48.082758 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-13 23:30:48.083225 | orchestrator | Tuesday 13 May 2025 23:30:48 +0000 (0:00:00.158) 0:00:13.709 *********** 2025-05-13 23:30:48.205852 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:48.205977 | orchestrator | 2025-05-13 23:30:48.206919 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-13 23:30:48.207689 | orchestrator | Tuesday 13 May 2025 23:30:48 +0000 (0:00:00.125) 0:00:13.834 *********** 2025-05-13 23:30:48.546451 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:48.548424 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:48.549624 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:48.550551 | orchestrator | 2025-05-13 23:30:48.551546 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-13 23:30:48.552625 | orchestrator | Tuesday 13 May 2025 23:30:48 +0000 (0:00:00.338) 0:00:14.173 *********** 2025-05-13 23:30:48.692046 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:48.694194 | orchestrator | 2025-05-13 23:30:48.694665 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-13 23:30:48.695643 | orchestrator | Tuesday 13 May 2025 23:30:48 +0000 (0:00:00.147) 0:00:14.321 *********** 2025-05-13 23:30:48.847965 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:48.848875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:48.850550 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:48.851245 | orchestrator | 2025-05-13 23:30:48.851637 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-05-13 23:30:48.852460 | orchestrator | Tuesday 13 May 2025 23:30:48 +0000 (0:00:00.154) 0:00:14.475 *********** 2025-05-13 23:30:48.984524 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:48.986012 | orchestrator | 2025-05-13 23:30:48.986826 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-13 23:30:48.987667 | orchestrator | Tuesday 13 May 2025 23:30:48 +0000 (0:00:00.136) 0:00:14.612 *********** 2025-05-13 23:30:49.134581 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:49.135087 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:49.135776 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:49.136868 | orchestrator | 2025-05-13 23:30:49.137217 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-13 23:30:49.138003 | orchestrator | Tuesday 13 May 2025 23:30:49 +0000 (0:00:00.151) 0:00:14.764 *********** 2025-05-13 23:30:49.282521 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:30:49.283255 | orchestrator | 2025-05-13 23:30:49.283833 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-13 23:30:49.284333 | orchestrator | Tuesday 13 May 2025 23:30:49 +0000 (0:00:00.143) 0:00:14.907 *********** 2025-05-13 23:30:49.450218 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:49.451144 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:49.452960 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:49.454120 | orchestrator | 2025-05-13 23:30:49.456062 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-13 23:30:49.456356 | orchestrator | Tuesday 13 May 2025 23:30:49 +0000 (0:00:00.169) 0:00:15.077 *********** 2025-05-13 23:30:49.601243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:49.601809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:49.602822 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:49.604263 | orchestrator | 2025-05-13 23:30:49.605696 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-13 23:30:49.606546 | orchestrator | Tuesday 13 May 2025 23:30:49 +0000 (0:00:00.152) 0:00:15.229 *********** 2025-05-13 23:30:49.752613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:49.755449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  
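The "Create block VGs" and "Create block LVs" tasks above carve one VG and one LV per OSD device out of the lvm_volumes structure. A minimal sketch using the community.general LVM modules; the vg-to-PV lookup name is an assumption and the module choice is illustrative, not confirmed by the log:

# Sketch only: _block_vg_pvs is assumed to be the dict built by
# "Create dict of block VGs -> PVs from ceph_osd_devices", e.g.
# {"ceph-cf55...0720": "/dev/sdb", "ceph-9ea6...605c": "/dev/sdc"}.
- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"                  # ceph-<uuid>
    pvs: "{{ _block_vg_pvs[item.data_vg] }}"  # backing disk
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"                     # osd-block-<uuid>
    size: 100%FREE                            # whole VG per OSD
  loop: "{{ lvm_volumes }}"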
2025-05-13 23:30:49.755493 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:49.755999 | orchestrator | 2025-05-13 23:30:49.756879 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-13 23:30:49.757743 | orchestrator | Tuesday 13 May 2025 23:30:49 +0000 (0:00:00.151) 0:00:15.381 *********** 2025-05-13 23:30:49.884470 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:49.885098 | orchestrator | 2025-05-13 23:30:49.886165 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-13 23:30:49.886655 | orchestrator | Tuesday 13 May 2025 23:30:49 +0000 (0:00:00.130) 0:00:15.511 *********** 2025-05-13 23:30:50.008708 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:50.010222 | orchestrator | 2025-05-13 23:30:50.010946 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-13 23:30:50.011290 | orchestrator | Tuesday 13 May 2025 23:30:50 +0000 (0:00:00.125) 0:00:15.637 *********** 2025-05-13 23:30:50.148961 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:50.149711 | orchestrator | 2025-05-13 23:30:50.150051 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-13 23:30:50.151009 | orchestrator | Tuesday 13 May 2025 23:30:50 +0000 (0:00:00.140) 0:00:15.777 *********** 2025-05-13 23:30:50.482121 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 23:30:50.483443 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-13 23:30:50.484439 | orchestrator | } 2025-05-13 23:30:50.485965 | orchestrator | 2025-05-13 23:30:50.486903 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-13 23:30:50.487423 | orchestrator | Tuesday 13 May 2025 23:30:50 +0000 (0:00:00.331) 0:00:16.109 *********** 2025-05-13 23:30:50.629911 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 23:30:50.631908 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-13 23:30:50.632514 | orchestrator | } 2025-05-13 23:30:50.633521 | orchestrator | 2025-05-13 23:30:50.634599 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-13 23:30:50.635784 | orchestrator | Tuesday 13 May 2025 23:30:50 +0000 (0:00:00.148) 0:00:16.258 *********** 2025-05-13 23:30:50.784775 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 23:30:50.785233 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-13 23:30:50.786114 | orchestrator | } 2025-05-13 23:30:50.786485 | orchestrator | 2025-05-13 23:30:50.787137 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-13 23:30:50.787990 | orchestrator | Tuesday 13 May 2025 23:30:50 +0000 (0:00:00.155) 0:00:16.414 *********** 2025-05-13 23:30:51.436151 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:30:51.437115 | orchestrator | 2025-05-13 23:30:51.438218 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-13 23:30:51.438890 | orchestrator | Tuesday 13 May 2025 23:30:51 +0000 (0:00:00.651) 0:00:17.065 *********** 2025-05-13 23:30:51.927705 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:30:51.930149 | orchestrator | 2025-05-13 23:30:51.931135 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-13 23:30:51.931993 | orchestrator | Tuesday 13 May 2025 23:30:51 +0000 (0:00:00.490) 
0:00:17.555 *********** 2025-05-13 23:30:52.448203 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:30:52.448310 | orchestrator | 2025-05-13 23:30:52.449891 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-13 23:30:52.450460 | orchestrator | Tuesday 13 May 2025 23:30:52 +0000 (0:00:00.517) 0:00:18.073 *********** 2025-05-13 23:30:52.586555 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:30:52.587546 | orchestrator | 2025-05-13 23:30:52.590499 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-13 23:30:52.591625 | orchestrator | Tuesday 13 May 2025 23:30:52 +0000 (0:00:00.141) 0:00:18.214 *********** 2025-05-13 23:30:52.699054 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:52.700761 | orchestrator | 2025-05-13 23:30:52.704637 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-13 23:30:52.705507 | orchestrator | Tuesday 13 May 2025 23:30:52 +0000 (0:00:00.113) 0:00:18.327 *********** 2025-05-13 23:30:52.819241 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:52.819777 | orchestrator | 2025-05-13 23:30:52.820789 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-13 23:30:52.823087 | orchestrator | Tuesday 13 May 2025 23:30:52 +0000 (0:00:00.120) 0:00:18.448 *********** 2025-05-13 23:30:52.961893 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 23:30:52.963649 | orchestrator |  "vgs_report": { 2025-05-13 23:30:52.965344 | orchestrator |  "vg": [] 2025-05-13 23:30:52.966898 | orchestrator |  } 2025-05-13 23:30:52.967776 | orchestrator | } 2025-05-13 23:30:52.968010 | orchestrator | 2025-05-13 23:30:52.968504 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-13 23:30:52.969057 | orchestrator | Tuesday 13 May 2025 23:30:52 +0000 (0:00:00.142) 0:00:18.590 *********** 2025-05-13 23:30:53.102939 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:53.103117 | orchestrator | 2025-05-13 23:30:53.103710 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-13 23:30:53.104587 | orchestrator | Tuesday 13 May 2025 23:30:53 +0000 (0:00:00.141) 0:00:18.732 *********** 2025-05-13 23:30:53.234836 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:53.234926 | orchestrator | 2025-05-13 23:30:53.236036 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-13 23:30:53.237007 | orchestrator | Tuesday 13 May 2025 23:30:53 +0000 (0:00:00.131) 0:00:18.863 *********** 2025-05-13 23:30:53.603462 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:53.604953 | orchestrator | 2025-05-13 23:30:53.606006 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-13 23:30:53.607210 | orchestrator | Tuesday 13 May 2025 23:30:53 +0000 (0:00:00.368) 0:00:19.231 *********** 2025-05-13 23:30:53.736358 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:53.737637 | orchestrator | 2025-05-13 23:30:53.738569 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-13 23:30:53.739794 | orchestrator | Tuesday 13 May 2025 23:30:53 +0000 (0:00:00.134) 0:00:19.365 *********** 2025-05-13 23:30:53.878659 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:53.880185 | orchestrator | 
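The three "Gather ... VGs" tasks above collect LVM report JSON that is then merged; on this node no DB/WAL VGs exist, hence the empty vgs_report printed next. A minimal sketch of one such gather step, assuming a plain vgs invocation with JSON report output; the exact flags, any VG-name filtering, and the register names are assumptions:

# The production task presumably restricts the report to the DB/WAL
# VG names generated earlier; none exist here, so the merged report
# reduces to the empty {"vg": []} shown in "Print LVM VGs report data".
- name: Gather DB VGs with total and available size in bytes
  ansible.builtin.command: >-
    vgs --reportformat json --units b --nosuffix -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

- name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
  ansible.builtin.set_fact:
    vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"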
2025-05-13 23:30:53.881491 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-13 23:30:53.882301 | orchestrator | Tuesday 13 May 2025 23:30:53 +0000 (0:00:00.141) 0:00:19.506 *********** 2025-05-13 23:30:54.006679 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:54.007618 | orchestrator | 2025-05-13 23:30:54.009148 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-13 23:30:54.010253 | orchestrator | Tuesday 13 May 2025 23:30:53 +0000 (0:00:00.127) 0:00:19.634 *********** 2025-05-13 23:30:54.154075 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:54.154181 | orchestrator | 2025-05-13 23:30:54.154197 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-13 23:30:54.154210 | orchestrator | Tuesday 13 May 2025 23:30:54 +0000 (0:00:00.148) 0:00:19.782 *********** 2025-05-13 23:30:54.285978 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:54.287032 | orchestrator | 2025-05-13 23:30:54.288764 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-13 23:30:54.289065 | orchestrator | Tuesday 13 May 2025 23:30:54 +0000 (0:00:00.131) 0:00:19.914 *********** 2025-05-13 23:30:54.422336 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:54.422829 | orchestrator | 2025-05-13 23:30:54.424071 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-13 23:30:54.425605 | orchestrator | Tuesday 13 May 2025 23:30:54 +0000 (0:00:00.136) 0:00:20.050 *********** 2025-05-13 23:30:54.559342 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:54.560701 | orchestrator | 2025-05-13 23:30:54.561030 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-13 23:30:54.563616 | orchestrator | Tuesday 13 May 2025 23:30:54 +0000 (0:00:00.137) 0:00:20.188 *********** 2025-05-13 23:30:54.691295 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:54.691969 | orchestrator | 2025-05-13 23:30:54.693838 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-13 23:30:54.694497 | orchestrator | Tuesday 13 May 2025 23:30:54 +0000 (0:00:00.132) 0:00:20.320 *********** 2025-05-13 23:30:54.836328 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:54.836996 | orchestrator | 2025-05-13 23:30:54.838528 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-13 23:30:54.840767 | orchestrator | Tuesday 13 May 2025 23:30:54 +0000 (0:00:00.142) 0:00:20.463 *********** 2025-05-13 23:30:54.965685 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:54.965978 | orchestrator | 2025-05-13 23:30:54.967569 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-13 23:30:54.968292 | orchestrator | Tuesday 13 May 2025 23:30:54 +0000 (0:00:00.130) 0:00:20.593 *********** 2025-05-13 23:30:55.099711 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:55.101401 | orchestrator | 2025-05-13 23:30:55.102894 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-13 23:30:55.103261 | orchestrator | Tuesday 13 May 2025 23:30:55 +0000 (0:00:00.133) 0:00:20.727 *********** 2025-05-13 23:30:55.459465 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:55.462696 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:55.463942 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:55.464516 | orchestrator | 2025-05-13 23:30:55.465246 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-13 23:30:55.466625 | orchestrator | Tuesday 13 May 2025 23:30:55 +0000 (0:00:00.359) 0:00:21.087 *********** 2025-05-13 23:30:55.620842 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:55.620940 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:55.621283 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:55.622598 | orchestrator | 2025-05-13 23:30:55.623355 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-13 23:30:55.624225 | orchestrator | Tuesday 13 May 2025 23:30:55 +0000 (0:00:00.160) 0:00:21.248 *********** 2025-05-13 23:30:55.773754 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:55.773875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:55.775220 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:55.775980 | orchestrator | 2025-05-13 23:30:55.776843 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-13 23:30:55.777725 | orchestrator | Tuesday 13 May 2025 23:30:55 +0000 (0:00:00.153) 0:00:21.401 *********** 2025-05-13 23:30:55.924109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:55.925062 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:55.926677 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:55.927629 | orchestrator | 2025-05-13 23:30:55.929045 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-13 23:30:55.929700 | orchestrator | Tuesday 13 May 2025 23:30:55 +0000 (0:00:00.150) 0:00:21.552 *********** 2025-05-13 23:30:56.084233 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:56.084716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:56.085712 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:56.086258 | orchestrator | 2025-05-13 23:30:56.088198 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-05-13 23:30:56.088823 | orchestrator | Tuesday 13 May 2025 23:30:56 +0000 (0:00:00.160) 0:00:21.713 *********** 2025-05-13 23:30:56.232733 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:56.233527 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:56.234885 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:56.236546 | orchestrator | 2025-05-13 23:30:56.237802 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-13 23:30:56.238941 | orchestrator | Tuesday 13 May 2025 23:30:56 +0000 (0:00:00.147) 0:00:21.861 *********** 2025-05-13 23:30:56.390351 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:56.390516 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:56.391952 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:56.392242 | orchestrator | 2025-05-13 23:30:56.394274 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-13 23:30:56.395051 | orchestrator | Tuesday 13 May 2025 23:30:56 +0000 (0:00:00.156) 0:00:22.017 *********** 2025-05-13 23:30:56.550147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:56.550343 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:56.550363 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:56.551258 | orchestrator | 2025-05-13 23:30:56.551670 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-13 23:30:56.554653 | orchestrator | Tuesday 13 May 2025 23:30:56 +0000 (0:00:00.160) 0:00:22.178 *********** 2025-05-13 23:30:57.078133 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:30:57.078660 | orchestrator | 2025-05-13 23:30:57.079909 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-13 23:30:57.081854 | orchestrator | Tuesday 13 May 2025 23:30:57 +0000 (0:00:00.528) 0:00:22.707 *********** 2025-05-13 23:30:57.595576 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:30:57.595806 | orchestrator | 2025-05-13 23:30:57.597337 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-13 23:30:57.597897 | orchestrator | Tuesday 13 May 2025 23:30:57 +0000 (0:00:00.515) 0:00:23.222 *********** 2025-05-13 23:30:57.735014 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:30:57.736058 | orchestrator | 2025-05-13 23:30:57.736864 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-13 23:30:57.737361 | orchestrator | Tuesday 13 May 2025 23:30:57 +0000 (0:00:00.140) 0:00:23.362 *********** 2025-05-13 23:30:57.901759 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'vg_name': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'}) 2025-05-13 23:30:57.902577 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'vg_name': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'}) 2025-05-13 23:30:57.903833 | orchestrator | 2025-05-13 23:30:57.908051 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-13 23:30:57.908189 | orchestrator | Tuesday 13 May 2025 23:30:57 +0000 (0:00:00.167) 0:00:23.530 *********** 2025-05-13 23:30:58.298574 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:58.298678 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:58.299846 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:58.301548 | orchestrator | 2025-05-13 23:30:58.302357 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-13 23:30:58.302533 | orchestrator | Tuesday 13 May 2025 23:30:58 +0000 (0:00:00.397) 0:00:23.927 *********** 2025-05-13 23:30:58.447041 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:58.455587 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:58.455712 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:58.455729 | orchestrator | 2025-05-13 23:30:58.455741 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-13 23:30:58.455754 | orchestrator | Tuesday 13 May 2025 23:30:58 +0000 (0:00:00.146) 0:00:24.073 *********** 2025-05-13 23:30:58.620898 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})  2025-05-13 23:30:58.621240 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})  2025-05-13 23:30:58.621961 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:30:58.622929 | orchestrator | 2025-05-13 23:30:58.623761 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-13 23:30:58.624448 | orchestrator | Tuesday 13 May 2025 23:30:58 +0000 (0:00:00.177) 0:00:24.250 *********** 2025-05-13 23:30:58.922906 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 23:30:58.923008 | orchestrator |  "lvm_report": { 2025-05-13 23:30:58.925392 | orchestrator |  "lv": [ 2025-05-13 23:30:58.927335 | orchestrator |  { 2025-05-13 23:30:58.928818 | orchestrator |  "lv_name": "osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c", 2025-05-13 23:30:58.929534 | orchestrator |  "vg_name": "ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c" 2025-05-13 23:30:58.930809 | orchestrator |  }, 2025-05-13 23:30:58.932015 | orchestrator |  { 2025-05-13 23:30:58.932729 | orchestrator |  "lv_name": "osd-block-cf553414-fd5b-54a4-812a-8e7012220720", 2025-05-13 23:30:58.933913 | orchestrator |  "vg_name": 
"ceph-cf553414-fd5b-54a4-812a-8e7012220720" 2025-05-13 23:30:58.934924 | orchestrator |  } 2025-05-13 23:30:58.935346 | orchestrator |  ], 2025-05-13 23:30:58.936554 | orchestrator |  "pv": [ 2025-05-13 23:30:58.937390 | orchestrator |  { 2025-05-13 23:30:58.937839 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-13 23:30:58.938583 | orchestrator |  "vg_name": "ceph-cf553414-fd5b-54a4-812a-8e7012220720" 2025-05-13 23:30:58.939640 | orchestrator |  }, 2025-05-13 23:30:58.940032 | orchestrator |  { 2025-05-13 23:30:58.940858 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-13 23:30:58.941464 | orchestrator |  "vg_name": "ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c" 2025-05-13 23:30:58.942137 | orchestrator |  } 2025-05-13 23:30:58.942636 | orchestrator |  ] 2025-05-13 23:30:58.943251 | orchestrator |  } 2025-05-13 23:30:58.943928 | orchestrator | } 2025-05-13 23:30:58.944390 | orchestrator | 2025-05-13 23:30:58.944938 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-13 23:30:58.945306 | orchestrator | 2025-05-13 23:30:58.945990 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-13 23:30:58.946322 | orchestrator | Tuesday 13 May 2025 23:30:58 +0000 (0:00:00.298) 0:00:24.549 *********** 2025-05-13 23:30:59.215812 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-13 23:30:59.215965 | orchestrator | 2025-05-13 23:30:59.215995 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-13 23:30:59.216101 | orchestrator | Tuesday 13 May 2025 23:30:59 +0000 (0:00:00.292) 0:00:24.842 *********** 2025-05-13 23:30:59.449998 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:30:59.450656 | orchestrator | 2025-05-13 23:30:59.452617 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:59.453751 | orchestrator | Tuesday 13 May 2025 23:30:59 +0000 (0:00:00.235) 0:00:25.077 *********** 2025-05-13 23:30:59.876636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-13 23:30:59.876741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-13 23:30:59.877649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-13 23:30:59.879220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-13 23:30:59.879256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-13 23:30:59.879277 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-13 23:30:59.879296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-13 23:30:59.879960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-13 23:30:59.880161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-13 23:30:59.880819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-13 23:30:59.881198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-13 23:30:59.881786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-05-13 23:30:59.882205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-13 23:30:59.882739 | orchestrator | 2025-05-13 23:30:59.882966 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:30:59.883391 | orchestrator | Tuesday 13 May 2025 23:30:59 +0000 (0:00:00.426) 0:00:25.504 *********** 2025-05-13 23:31:00.081251 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:00.082769 | orchestrator | 2025-05-13 23:31:00.083935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:00.085246 | orchestrator | Tuesday 13 May 2025 23:31:00 +0000 (0:00:00.204) 0:00:25.709 *********** 2025-05-13 23:31:00.283148 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:00.284012 | orchestrator | 2025-05-13 23:31:00.285644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:00.286365 | orchestrator | Tuesday 13 May 2025 23:31:00 +0000 (0:00:00.203) 0:00:25.912 *********** 2025-05-13 23:31:00.913618 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:00.915741 | orchestrator | 2025-05-13 23:31:00.916508 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:00.917258 | orchestrator | Tuesday 13 May 2025 23:31:00 +0000 (0:00:00.627) 0:00:26.540 *********** 2025-05-13 23:31:01.127595 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:01.128156 | orchestrator | 2025-05-13 23:31:01.128877 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:01.129781 | orchestrator | Tuesday 13 May 2025 23:31:01 +0000 (0:00:00.216) 0:00:26.756 *********** 2025-05-13 23:31:01.339889 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:01.340551 | orchestrator | 2025-05-13 23:31:01.341520 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:01.342610 | orchestrator | Tuesday 13 May 2025 23:31:01 +0000 (0:00:00.211) 0:00:26.968 *********** 2025-05-13 23:31:01.549032 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:01.550300 | orchestrator | 2025-05-13 23:31:01.550712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:01.552197 | orchestrator | Tuesday 13 May 2025 23:31:01 +0000 (0:00:00.209) 0:00:27.177 *********** 2025-05-13 23:31:01.751556 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:01.751733 | orchestrator | 2025-05-13 23:31:01.754182 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:01.754794 | orchestrator | Tuesday 13 May 2025 23:31:01 +0000 (0:00:00.201) 0:00:27.378 *********** 2025-05-13 23:31:01.961369 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:01.962496 | orchestrator | 2025-05-13 23:31:01.964101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:01.965787 | orchestrator | Tuesday 13 May 2025 23:31:01 +0000 (0:00:00.211) 0:00:27.589 *********** 2025-05-13 23:31:02.378399 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7) 2025-05-13 23:31:02.380122 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7) 2025-05-13 
23:31:02.381461 | orchestrator | 2025-05-13 23:31:02.383253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:02.384034 | orchestrator | Tuesday 13 May 2025 23:31:02 +0000 (0:00:00.417) 0:00:28.007 *********** 2025-05-13 23:31:02.802091 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05) 2025-05-13 23:31:02.802705 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05) 2025-05-13 23:31:02.803558 | orchestrator | 2025-05-13 23:31:02.803906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:02.804629 | orchestrator | Tuesday 13 May 2025 23:31:02 +0000 (0:00:00.420) 0:00:28.428 *********** 2025-05-13 23:31:03.212775 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648) 2025-05-13 23:31:03.214158 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648) 2025-05-13 23:31:03.215979 | orchestrator | 2025-05-13 23:31:03.217427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:03.218218 | orchestrator | Tuesday 13 May 2025 23:31:03 +0000 (0:00:00.413) 0:00:28.841 *********** 2025-05-13 23:31:03.647510 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677) 2025-05-13 23:31:03.647656 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677) 2025-05-13 23:31:03.648660 | orchestrator | 2025-05-13 23:31:03.649948 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:03.650990 | orchestrator | Tuesday 13 May 2025 23:31:03 +0000 (0:00:00.434) 0:00:29.275 *********** 2025-05-13 23:31:03.967241 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-13 23:31:03.967396 | orchestrator | 2025-05-13 23:31:03.969074 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:03.970052 | orchestrator | Tuesday 13 May 2025 23:31:03 +0000 (0:00:00.318) 0:00:29.594 *********** 2025-05-13 23:31:04.601412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-13 23:31:04.601839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-13 23:31:04.602242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-13 23:31:04.603148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-13 23:31:04.604991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-13 23:31:04.606288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-13 23:31:04.607627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-13 23:31:04.608624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-13 23:31:04.609684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-13 23:31:04.610299 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-13 23:31:04.611431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-13 23:31:04.612200 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-13 23:31:04.612949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-13 23:31:04.613643 | orchestrator | 2025-05-13 23:31:04.614565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:04.615359 | orchestrator | Tuesday 13 May 2025 23:31:04 +0000 (0:00:00.634) 0:00:30.229 *********** 2025-05-13 23:31:04.800169 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:04.800286 | orchestrator | 2025-05-13 23:31:04.800655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:04.800959 | orchestrator | Tuesday 13 May 2025 23:31:04 +0000 (0:00:00.197) 0:00:30.427 *********** 2025-05-13 23:31:05.063014 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:05.063620 | orchestrator | 2025-05-13 23:31:05.064633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:05.065879 | orchestrator | Tuesday 13 May 2025 23:31:05 +0000 (0:00:00.263) 0:00:30.690 *********** 2025-05-13 23:31:05.271601 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:05.275606 | orchestrator | 2025-05-13 23:31:05.275671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:05.276481 | orchestrator | Tuesday 13 May 2025 23:31:05 +0000 (0:00:00.208) 0:00:30.899 *********** 2025-05-13 23:31:05.483196 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:05.484364 | orchestrator | 2025-05-13 23:31:05.485132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:05.485822 | orchestrator | Tuesday 13 May 2025 23:31:05 +0000 (0:00:00.212) 0:00:31.112 *********** 2025-05-13 23:31:05.679818 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:05.682244 | orchestrator | 2025-05-13 23:31:05.682672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:05.683497 | orchestrator | Tuesday 13 May 2025 23:31:05 +0000 (0:00:00.195) 0:00:31.307 *********** 2025-05-13 23:31:05.878928 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:05.879996 | orchestrator | 2025-05-13 23:31:05.880661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:05.882332 | orchestrator | Tuesday 13 May 2025 23:31:05 +0000 (0:00:00.197) 0:00:31.505 *********** 2025-05-13 23:31:06.070876 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:06.072388 | orchestrator | 2025-05-13 23:31:06.072935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:06.074007 | orchestrator | Tuesday 13 May 2025 23:31:06 +0000 (0:00:00.192) 0:00:31.697 *********** 2025-05-13 23:31:06.255045 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:06.255675 | orchestrator | 2025-05-13 23:31:06.257170 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:06.257637 | orchestrator 
| Tuesday 13 May 2025 23:31:06 +0000 (0:00:00.186) 0:00:31.884 *********** 2025-05-13 23:31:07.168526 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-13 23:31:07.169033 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-13 23:31:07.169950 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-13 23:31:07.170921 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-13 23:31:07.171675 | orchestrator | 2025-05-13 23:31:07.172644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:07.173695 | orchestrator | Tuesday 13 May 2025 23:31:07 +0000 (0:00:00.912) 0:00:32.796 *********** 2025-05-13 23:31:07.368871 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:07.369166 | orchestrator | 2025-05-13 23:31:07.369895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:07.370314 | orchestrator | Tuesday 13 May 2025 23:31:07 +0000 (0:00:00.201) 0:00:32.998 *********** 2025-05-13 23:31:07.552041 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:07.552163 | orchestrator | 2025-05-13 23:31:07.553437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:07.554219 | orchestrator | Tuesday 13 May 2025 23:31:07 +0000 (0:00:00.180) 0:00:33.178 *********** 2025-05-13 23:31:08.165983 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:08.166231 | orchestrator | 2025-05-13 23:31:08.167148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:08.168907 | orchestrator | Tuesday 13 May 2025 23:31:08 +0000 (0:00:00.614) 0:00:33.793 *********** 2025-05-13 23:31:08.363552 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:08.363778 | orchestrator | 2025-05-13 23:31:08.364602 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-13 23:31:08.365342 | orchestrator | Tuesday 13 May 2025 23:31:08 +0000 (0:00:00.198) 0:00:33.992 *********** 2025-05-13 23:31:08.504806 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:08.505247 | orchestrator | 2025-05-13 23:31:08.506424 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-13 23:31:08.506888 | orchestrator | Tuesday 13 May 2025 23:31:08 +0000 (0:00:00.141) 0:00:34.134 *********** 2025-05-13 23:31:08.690918 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f56c737-ae06-5042-be62-d4d7430a3913'}}) 2025-05-13 23:31:08.691128 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'}}) 2025-05-13 23:31:08.691152 | orchestrator | 2025-05-13 23:31:08.691691 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-13 23:31:08.692005 | orchestrator | Tuesday 13 May 2025 23:31:08 +0000 (0:00:00.186) 0:00:34.320 *********** 2025-05-13 23:31:10.541800 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'}) 2025-05-13 23:31:10.543399 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'}) 2025-05-13 23:31:10.544714 | orchestrator | 2025-05-13 23:31:10.545792 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-05-13 23:31:10.546439 | orchestrator | Tuesday 13 May 2025 23:31:10 +0000 (0:00:01.847) 0:00:36.168 *********** 2025-05-13 23:31:10.702723 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:10.704078 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:10.706234 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:10.707742 | orchestrator | 2025-05-13 23:31:10.708980 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-13 23:31:10.709816 | orchestrator | Tuesday 13 May 2025 23:31:10 +0000 (0:00:00.162) 0:00:36.330 *********** 2025-05-13 23:31:11.965240 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'}) 2025-05-13 23:31:11.965386 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'}) 2025-05-13 23:31:11.966008 | orchestrator | 2025-05-13 23:31:11.967792 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-13 23:31:11.968149 | orchestrator | Tuesday 13 May 2025 23:31:11 +0000 (0:00:01.261) 0:00:37.591 *********** 2025-05-13 23:31:12.116963 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:12.117709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:12.119105 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:12.119941 | orchestrator | 2025-05-13 23:31:12.120698 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-13 23:31:12.121796 | orchestrator | Tuesday 13 May 2025 23:31:12 +0000 (0:00:00.152) 0:00:37.743 *********** 2025-05-13 23:31:12.250601 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:12.252100 | orchestrator | 2025-05-13 23:31:12.253570 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-13 23:31:12.254539 | orchestrator | Tuesday 13 May 2025 23:31:12 +0000 (0:00:00.134) 0:00:37.878 *********** 2025-05-13 23:31:12.399055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:12.400224 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:12.400951 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:12.402341 | orchestrator | 2025-05-13 23:31:12.402488 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-13 23:31:12.403335 | orchestrator | Tuesday 13 May 2025 23:31:12 +0000 (0:00:00.147) 0:00:38.026 *********** 2025-05-13 23:31:12.539996 | orchestrator | skipping: [testbed-node-4] 
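(The two "changed" results above — "Create block VGs" and "Create block LVs" — are where the OSD volume groups and logical volumes are actually created on this node's data disks. The playbook itself is not shown in this log; a plausible sketch using the community.general LVM modules follows. The VG-to-PV mapping variable _block_vgs_to_pvs is invented for illustration; the log's earlier task "Create dict of block VGs -> PVs from ceph_osd_devices" suggests such a dict exists, but its real name is not visible here.)

    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        # _block_vgs_to_pvs is hypothetical: a dict mapping each block VG
        # to its physical volume (e.g. /dev/sdb), built from ceph_osd_devices.
        pvs: "{{ _block_vgs_to_pvs[item.data_vg] }}"
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%FREE
        shrink: false
      loop: "{{ lvm_volumes }}"
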
2025-05-13 23:31:12.540674 | orchestrator | 2025-05-13 23:31:12.542011 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-13 23:31:12.543479 | orchestrator | Tuesday 13 May 2025 23:31:12 +0000 (0:00:00.142) 0:00:38.168 *********** 2025-05-13 23:31:12.685878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:12.686745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:12.687922 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:12.688880 | orchestrator | 2025-05-13 23:31:12.689728 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-13 23:31:12.690340 | orchestrator | Tuesday 13 May 2025 23:31:12 +0000 (0:00:00.144) 0:00:38.312 *********** 2025-05-13 23:31:13.086413 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:13.087917 | orchestrator | 2025-05-13 23:31:13.089214 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-13 23:31:13.090377 | orchestrator | Tuesday 13 May 2025 23:31:13 +0000 (0:00:00.402) 0:00:38.715 *********** 2025-05-13 23:31:13.252189 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:13.253067 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:13.253677 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:13.256045 | orchestrator | 2025-05-13 23:31:13.256083 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-13 23:31:13.256627 | orchestrator | Tuesday 13 May 2025 23:31:13 +0000 (0:00:00.165) 0:00:38.881 *********** 2025-05-13 23:31:13.400326 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:31:13.401968 | orchestrator | 2025-05-13 23:31:13.402828 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-13 23:31:13.404398 | orchestrator | Tuesday 13 May 2025 23:31:13 +0000 (0:00:00.147) 0:00:39.028 *********** 2025-05-13 23:31:13.546005 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:13.548152 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:13.548917 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:13.550386 | orchestrator | 2025-05-13 23:31:13.551817 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-13 23:31:13.552607 | orchestrator | Tuesday 13 May 2025 23:31:13 +0000 (0:00:00.145) 0:00:39.174 *********** 2025-05-13 23:31:13.709021 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:13.709756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:13.711396 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:13.711902 | orchestrator | 2025-05-13 23:31:13.712891 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-13 23:31:13.713326 | orchestrator | Tuesday 13 May 2025 23:31:13 +0000 (0:00:00.160) 0:00:39.335 *********** 2025-05-13 23:31:13.859071 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:13.860289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:13.860418 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:13.861484 | orchestrator | 2025-05-13 23:31:13.862242 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-13 23:31:13.863902 | orchestrator | Tuesday 13 May 2025 23:31:13 +0000 (0:00:00.152) 0:00:39.487 *********** 2025-05-13 23:31:14.017948 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:14.018131 | orchestrator | 2025-05-13 23:31:14.019636 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-13 23:31:14.020755 | orchestrator | Tuesday 13 May 2025 23:31:14 +0000 (0:00:00.157) 0:00:39.645 *********** 2025-05-13 23:31:14.147840 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:14.147969 | orchestrator | 2025-05-13 23:31:14.148533 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-13 23:31:14.149444 | orchestrator | Tuesday 13 May 2025 23:31:14 +0000 (0:00:00.131) 0:00:39.777 *********** 2025-05-13 23:31:14.284818 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:14.285038 | orchestrator | 2025-05-13 23:31:14.286651 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-13 23:31:14.287757 | orchestrator | Tuesday 13 May 2025 23:31:14 +0000 (0:00:00.136) 0:00:39.913 *********** 2025-05-13 23:31:14.425892 | orchestrator | ok: [testbed-node-4] => { 2025-05-13 23:31:14.426585 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-13 23:31:14.427552 | orchestrator | } 2025-05-13 23:31:14.428767 | orchestrator | 2025-05-13 23:31:14.429610 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-13 23:31:14.430726 | orchestrator | Tuesday 13 May 2025 23:31:14 +0000 (0:00:00.141) 0:00:40.054 *********** 2025-05-13 23:31:14.581066 | orchestrator | ok: [testbed-node-4] => { 2025-05-13 23:31:14.581796 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-13 23:31:14.582390 | orchestrator | } 2025-05-13 23:31:14.583067 | orchestrator | 2025-05-13 23:31:14.584773 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-13 23:31:14.584792 | orchestrator | Tuesday 13 May 2025 23:31:14 +0000 (0:00:00.155) 0:00:40.209 *********** 2025-05-13 23:31:14.725953 | orchestrator | ok: [testbed-node-4] => { 2025-05-13 23:31:14.726274 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-13 23:31:14.727745 | orchestrator | } 2025-05-13 23:31:14.727906 | orchestrator | 2025-05-13 23:31:14.728802 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-05-13 23:31:14.729522 | orchestrator | Tuesday 13 May 2025 23:31:14 +0000 (0:00:00.142) 0:00:40.352 *********** 2025-05-13 23:31:15.453660 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:31:15.454617 | orchestrator | 2025-05-13 23:31:15.455638 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-13 23:31:15.458103 | orchestrator | Tuesday 13 May 2025 23:31:15 +0000 (0:00:00.729) 0:00:41.082 *********** 2025-05-13 23:31:15.969578 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:31:15.969690 | orchestrator | 2025-05-13 23:31:15.970267 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-13 23:31:15.970975 | orchestrator | Tuesday 13 May 2025 23:31:15 +0000 (0:00:00.514) 0:00:41.596 *********** 2025-05-13 23:31:16.509520 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:31:16.510312 | orchestrator | 2025-05-13 23:31:16.510355 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-13 23:31:16.510927 | orchestrator | Tuesday 13 May 2025 23:31:16 +0000 (0:00:00.541) 0:00:42.138 *********** 2025-05-13 23:31:16.669615 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:31:16.670397 | orchestrator | 2025-05-13 23:31:16.671736 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-13 23:31:16.672541 | orchestrator | Tuesday 13 May 2025 23:31:16 +0000 (0:00:00.158) 0:00:42.296 *********** 2025-05-13 23:31:16.788074 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:16.791134 | orchestrator | 2025-05-13 23:31:16.791680 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-13 23:31:16.792845 | orchestrator | Tuesday 13 May 2025 23:31:16 +0000 (0:00:00.117) 0:00:42.414 *********** 2025-05-13 23:31:16.915166 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:16.917172 | orchestrator | 2025-05-13 23:31:16.918667 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-13 23:31:16.920249 | orchestrator | Tuesday 13 May 2025 23:31:16 +0000 (0:00:00.129) 0:00:42.543 *********** 2025-05-13 23:31:17.095794 | orchestrator | ok: [testbed-node-4] => { 2025-05-13 23:31:17.097852 | orchestrator |  "vgs_report": { 2025-05-13 23:31:17.098900 | orchestrator |  "vg": [] 2025-05-13 23:31:17.102172 | orchestrator |  } 2025-05-13 23:31:17.104872 | orchestrator | } 2025-05-13 23:31:17.104916 | orchestrator | 2025-05-13 23:31:17.104926 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-13 23:31:17.105541 | orchestrator | Tuesday 13 May 2025 23:31:17 +0000 (0:00:00.179) 0:00:42.723 *********** 2025-05-13 23:31:17.235888 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:17.238141 | orchestrator | 2025-05-13 23:31:17.239077 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-13 23:31:17.240372 | orchestrator | Tuesday 13 May 2025 23:31:17 +0000 (0:00:00.139) 0:00:42.863 *********** 2025-05-13 23:31:17.368094 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:17.368202 | orchestrator | 2025-05-13 23:31:17.368219 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-13 23:31:17.368746 | orchestrator | Tuesday 13 May 2025 23:31:17 +0000 (0:00:00.131) 
0:00:42.994 *********** 2025-05-13 23:31:17.515503 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:17.516976 | orchestrator | 2025-05-13 23:31:17.517987 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-13 23:31:17.519562 | orchestrator | Tuesday 13 May 2025 23:31:17 +0000 (0:00:00.149) 0:00:43.144 *********** 2025-05-13 23:31:17.647672 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:17.647891 | orchestrator | 2025-05-13 23:31:17.648970 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-13 23:31:17.649750 | orchestrator | Tuesday 13 May 2025 23:31:17 +0000 (0:00:00.132) 0:00:43.276 *********** 2025-05-13 23:31:17.794609 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:17.796051 | orchestrator | 2025-05-13 23:31:17.797107 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-13 23:31:17.798067 | orchestrator | Tuesday 13 May 2025 23:31:17 +0000 (0:00:00.143) 0:00:43.420 *********** 2025-05-13 23:31:18.162422 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:18.162596 | orchestrator | 2025-05-13 23:31:18.162685 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-13 23:31:18.163611 | orchestrator | Tuesday 13 May 2025 23:31:18 +0000 (0:00:00.370) 0:00:43.791 *********** 2025-05-13 23:31:18.297783 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:18.300068 | orchestrator | 2025-05-13 23:31:18.300503 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-13 23:31:18.301631 | orchestrator | Tuesday 13 May 2025 23:31:18 +0000 (0:00:00.134) 0:00:43.925 *********** 2025-05-13 23:31:18.457920 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:18.458073 | orchestrator | 2025-05-13 23:31:18.459240 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-13 23:31:18.460144 | orchestrator | Tuesday 13 May 2025 23:31:18 +0000 (0:00:00.160) 0:00:44.086 *********** 2025-05-13 23:31:18.592887 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:18.593194 | orchestrator | 2025-05-13 23:31:18.594685 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-13 23:31:18.595395 | orchestrator | Tuesday 13 May 2025 23:31:18 +0000 (0:00:00.134) 0:00:44.221 *********** 2025-05-13 23:31:18.736094 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:18.736542 | orchestrator | 2025-05-13 23:31:18.737554 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-13 23:31:18.739614 | orchestrator | Tuesday 13 May 2025 23:31:18 +0000 (0:00:00.141) 0:00:44.362 *********** 2025-05-13 23:31:18.878363 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:18.878981 | orchestrator | 2025-05-13 23:31:18.880205 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-13 23:31:18.881165 | orchestrator | Tuesday 13 May 2025 23:31:18 +0000 (0:00:00.143) 0:00:44.506 *********** 2025-05-13 23:31:19.017846 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:19.018359 | orchestrator | 2025-05-13 23:31:19.018702 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-13 23:31:19.019423 | orchestrator | Tuesday 13 May 2025 23:31:19 
+0000 (0:00:00.140) 0:00:44.646 *********** 2025-05-13 23:31:19.145284 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:19.145679 | orchestrator | 2025-05-13 23:31:19.146364 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-13 23:31:19.147229 | orchestrator | Tuesday 13 May 2025 23:31:19 +0000 (0:00:00.128) 0:00:44.774 *********** 2025-05-13 23:31:19.316881 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:19.317134 | orchestrator | 2025-05-13 23:31:19.318632 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-13 23:31:19.320644 | orchestrator | Tuesday 13 May 2025 23:31:19 +0000 (0:00:00.170) 0:00:44.945 *********** 2025-05-13 23:31:19.466216 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:19.467117 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:19.468839 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:19.469716 | orchestrator | 2025-05-13 23:31:19.471679 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-13 23:31:19.472271 | orchestrator | Tuesday 13 May 2025 23:31:19 +0000 (0:00:00.149) 0:00:45.094 *********** 2025-05-13 23:31:19.625175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:19.625294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:19.625795 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:19.626804 | orchestrator | 2025-05-13 23:31:19.627978 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-13 23:31:19.629299 | orchestrator | Tuesday 13 May 2025 23:31:19 +0000 (0:00:00.158) 0:00:45.253 *********** 2025-05-13 23:31:19.798649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:19.799302 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:19.800409 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:19.801790 | orchestrator | 2025-05-13 23:31:19.802986 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-13 23:31:19.803819 | orchestrator | Tuesday 13 May 2025 23:31:19 +0000 (0:00:00.172) 0:00:45.426 *********** 2025-05-13 23:31:20.155608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:20.156201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:20.157674 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:20.157699 | orchestrator | 2025-05-13 23:31:20.159623 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-13 23:31:20.160024 | orchestrator | Tuesday 13 May 2025 23:31:20 +0000 (0:00:00.359) 0:00:45.785 *********** 2025-05-13 23:31:20.303065 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:20.303237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:20.303256 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:20.305213 | orchestrator | 2025-05-13 23:31:20.306902 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-13 23:31:20.307252 | orchestrator | Tuesday 13 May 2025 23:31:20 +0000 (0:00:00.147) 0:00:45.933 *********** 2025-05-13 23:31:20.456608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:20.456799 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:20.457715 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:20.458644 | orchestrator | 2025-05-13 23:31:20.459221 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-13 23:31:20.459730 | orchestrator | Tuesday 13 May 2025 23:31:20 +0000 (0:00:00.151) 0:00:46.084 *********** 2025-05-13 23:31:20.587943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:20.588793 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:20.589028 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:20.590724 | orchestrator | 2025-05-13 23:31:20.591627 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-13 23:31:20.592551 | orchestrator | Tuesday 13 May 2025 23:31:20 +0000 (0:00:00.132) 0:00:46.216 *********** 2025-05-13 23:31:20.751368 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:20.752082 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:20.752908 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:20.753547 | orchestrator | 2025-05-13 23:31:20.754696 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-13 23:31:20.755006 | orchestrator | Tuesday 13 May 2025 23:31:20 +0000 (0:00:00.162) 0:00:46.379 *********** 2025-05-13 23:31:21.232271 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:31:21.232382 | orchestrator | 2025-05-13 23:31:21.233177 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-13 23:31:21.233844 | orchestrator | Tuesday 13 May 2025 23:31:21 +0000 (0:00:00.482) 
0:00:46.861 *********** 2025-05-13 23:31:21.736066 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:31:21.736938 | orchestrator | 2025-05-13 23:31:21.737540 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-13 23:31:21.738290 | orchestrator | Tuesday 13 May 2025 23:31:21 +0000 (0:00:00.503) 0:00:47.365 *********** 2025-05-13 23:31:21.865016 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:31:21.866329 | orchestrator | 2025-05-13 23:31:21.866572 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-13 23:31:21.867876 | orchestrator | Tuesday 13 May 2025 23:31:21 +0000 (0:00:00.128) 0:00:47.493 *********** 2025-05-13 23:31:22.027989 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'vg_name': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'}) 2025-05-13 23:31:22.028387 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'vg_name': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'}) 2025-05-13 23:31:22.029099 | orchestrator | 2025-05-13 23:31:22.029941 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-13 23:31:22.030763 | orchestrator | Tuesday 13 May 2025 23:31:22 +0000 (0:00:00.162) 0:00:47.656 *********** 2025-05-13 23:31:22.168229 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:22.168869 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:22.170285 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:22.170325 | orchestrator | 2025-05-13 23:31:22.170884 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-13 23:31:22.171570 | orchestrator | Tuesday 13 May 2025 23:31:22 +0000 (0:00:00.141) 0:00:47.798 *********** 2025-05-13 23:31:22.303649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:22.305016 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:22.305931 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:22.306651 | orchestrator | 2025-05-13 23:31:22.307262 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-13 23:31:22.307856 | orchestrator | Tuesday 13 May 2025 23:31:22 +0000 (0:00:00.134) 0:00:47.933 *********** 2025-05-13 23:31:22.438645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})  2025-05-13 23:31:22.439353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})  2025-05-13 23:31:22.440456 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:22.443074 | orchestrator | 2025-05-13 23:31:22.443098 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-13 
23:31:22.443106 | orchestrator | Tuesday 13 May 2025 23:31:22 +0000 (0:00:00.135) 0:00:48.068 ***********
2025-05-13 23:31:22.835059 | orchestrator | ok: [testbed-node-4] => {
2025-05-13 23:31:22.835346 | orchestrator |  "lvm_report": {
2025-05-13 23:31:22.835737 | orchestrator |  "lv": [
2025-05-13 23:31:22.836869 | orchestrator |  {
2025-05-13 23:31:22.836895 | orchestrator |  "lv_name": "osd-block-8f56c737-ae06-5042-be62-d4d7430a3913",
2025-05-13 23:31:22.837251 | orchestrator |  "vg_name": "ceph-8f56c737-ae06-5042-be62-d4d7430a3913"
2025-05-13 23:31:22.837688 | orchestrator |  },
2025-05-13 23:31:22.838189 | orchestrator |  {
2025-05-13 23:31:22.838413 | orchestrator |  "lv_name": "osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3",
2025-05-13 23:31:22.838837 | orchestrator |  "vg_name": "ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3"
2025-05-13 23:31:22.839313 | orchestrator |  }
2025-05-13 23:31:22.839735 | orchestrator |  ],
2025-05-13 23:31:22.840321 | orchestrator |  "pv": [
2025-05-13 23:31:22.840432 | orchestrator |  {
2025-05-13 23:31:22.840954 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-13 23:31:22.841427 | orchestrator |  "vg_name": "ceph-8f56c737-ae06-5042-be62-d4d7430a3913"
2025-05-13 23:31:22.841919 | orchestrator |  },
2025-05-13 23:31:22.842255 | orchestrator |  {
2025-05-13 23:31:22.842640 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-13 23:31:22.843012 | orchestrator |  "vg_name": "ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3"
2025-05-13 23:31:22.843394 | orchestrator |  }
2025-05-13 23:31:22.843774 | orchestrator |  ]
2025-05-13 23:31:22.844174 | orchestrator |  }
2025-05-13 23:31:22.844505 | orchestrator | }
2025-05-13 23:31:22.844815 | orchestrator |
2025-05-13 23:31:22.845131 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-13 23:31:22.845441 | orchestrator |
2025-05-13 23:31:22.845794 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-13 23:31:22.846099 | orchestrator | Tuesday 13 May 2025 23:31:22 +0000 (0:00:00.396) 0:00:48.464 ***********
2025-05-13 23:31:23.048247 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-13 23:31:23.048601 | orchestrator |
2025-05-13 23:31:23.049332 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-13 23:31:23.050328 | orchestrator | Tuesday 13 May 2025 23:31:23 +0000 (0:00:00.212) 0:00:48.677 ***********
2025-05-13 23:31:23.266983 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:31:23.267087 | orchestrator |
2025-05-13 23:31:23.267622 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 23:31:23.268007 | orchestrator | Tuesday 13 May 2025 23:31:23 +0000 (0:00:00.219) 0:00:48.896 ***********
2025-05-13 23:31:23.656140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-13 23:31:23.657512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-13 23:31:23.659859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-13 23:31:23.661869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-13 23:31:23.662263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-13 23:31:23.662926 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-13 23:31:23.662947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-13 23:31:23.662959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-13 23:31:23.663178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-13 23:31:23.663536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-13 23:31:23.663837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-13 23:31:23.664287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-13 23:31:23.664305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-13 23:31:23.664355 | orchestrator | 2025-05-13 23:31:23.664859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:23.664877 | orchestrator | Tuesday 13 May 2025 23:31:23 +0000 (0:00:00.388) 0:00:49.285 *********** 2025-05-13 23:31:23.830955 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:23.831370 | orchestrator | 2025-05-13 23:31:23.832051 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:23.832766 | orchestrator | Tuesday 13 May 2025 23:31:23 +0000 (0:00:00.173) 0:00:49.459 *********** 2025-05-13 23:31:24.008916 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:24.009637 | orchestrator | 2025-05-13 23:31:24.010651 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:24.011423 | orchestrator | Tuesday 13 May 2025 23:31:24 +0000 (0:00:00.178) 0:00:49.638 *********** 2025-05-13 23:31:24.202304 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:24.202825 | orchestrator | 2025-05-13 23:31:24.203558 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:24.204295 | orchestrator | Tuesday 13 May 2025 23:31:24 +0000 (0:00:00.193) 0:00:49.831 *********** 2025-05-13 23:31:24.383930 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:24.384712 | orchestrator | 2025-05-13 23:31:24.385131 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:24.386459 | orchestrator | Tuesday 13 May 2025 23:31:24 +0000 (0:00:00.180) 0:00:50.012 *********** 2025-05-13 23:31:24.582364 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:24.582536 | orchestrator | 2025-05-13 23:31:24.583072 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:24.583740 | orchestrator | Tuesday 13 May 2025 23:31:24 +0000 (0:00:00.199) 0:00:50.211 *********** 2025-05-13 23:31:25.197767 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:25.198100 | orchestrator | 2025-05-13 23:31:25.198618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:25.198879 | orchestrator | Tuesday 13 May 2025 23:31:25 +0000 (0:00:00.614) 0:00:50.825 *********** 2025-05-13 23:31:25.402754 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:25.403249 | orchestrator | 2025-05-13 23:31:25.404387 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-05-13 23:31:25.404609 | orchestrator | Tuesday 13 May 2025 23:31:25 +0000 (0:00:00.206) 0:00:51.031 *********** 2025-05-13 23:31:25.604538 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:25.605183 | orchestrator | 2025-05-13 23:31:25.606317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:25.606665 | orchestrator | Tuesday 13 May 2025 23:31:25 +0000 (0:00:00.200) 0:00:51.232 *********** 2025-05-13 23:31:26.010790 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78) 2025-05-13 23:31:26.011239 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78) 2025-05-13 23:31:26.012197 | orchestrator | 2025-05-13 23:31:26.013702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:26.014228 | orchestrator | Tuesday 13 May 2025 23:31:26 +0000 (0:00:00.405) 0:00:51.637 *********** 2025-05-13 23:31:26.422823 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8) 2025-05-13 23:31:26.422928 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8) 2025-05-13 23:31:26.423734 | orchestrator | 2025-05-13 23:31:26.424420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:26.424873 | orchestrator | Tuesday 13 May 2025 23:31:26 +0000 (0:00:00.413) 0:00:52.050 *********** 2025-05-13 23:31:26.858961 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e) 2025-05-13 23:31:26.859128 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e) 2025-05-13 23:31:26.859775 | orchestrator | 2025-05-13 23:31:26.861154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:26.862298 | orchestrator | Tuesday 13 May 2025 23:31:26 +0000 (0:00:00.435) 0:00:52.486 *********** 2025-05-13 23:31:27.294953 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3) 2025-05-13 23:31:27.297104 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3) 2025-05-13 23:31:27.298506 | orchestrator | 2025-05-13 23:31:27.299207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 23:31:27.299613 | orchestrator | Tuesday 13 May 2025 23:31:27 +0000 (0:00:00.436) 0:00:52.922 *********** 2025-05-13 23:31:27.630931 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-13 23:31:27.631390 | orchestrator | 2025-05-13 23:31:27.632837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:27.634531 | orchestrator | Tuesday 13 May 2025 23:31:27 +0000 (0:00:00.336) 0:00:53.259 *********** 2025-05-13 23:31:28.051704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-13 23:31:28.051874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-13 23:31:28.054167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 
2025-05-13 23:31:28.057125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-13 23:31:28.058434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-13 23:31:28.059309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-13 23:31:28.060040 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-13 23:31:28.061107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-13 23:31:28.062070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-13 23:31:28.063003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-13 23:31:28.064003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-13 23:31:28.064857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-13 23:31:28.065750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-13 23:31:28.066618 | orchestrator | 2025-05-13 23:31:28.066809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:28.067168 | orchestrator | Tuesday 13 May 2025 23:31:28 +0000 (0:00:00.418) 0:00:53.677 *********** 2025-05-13 23:31:28.302638 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:28.302744 | orchestrator | 2025-05-13 23:31:28.306131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:28.306192 | orchestrator | Tuesday 13 May 2025 23:31:28 +0000 (0:00:00.253) 0:00:53.930 *********** 2025-05-13 23:31:28.481717 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:28.484886 | orchestrator | 2025-05-13 23:31:28.485094 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:28.486001 | orchestrator | Tuesday 13 May 2025 23:31:28 +0000 (0:00:00.179) 0:00:54.109 *********** 2025-05-13 23:31:29.135888 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:29.136940 | orchestrator | 2025-05-13 23:31:29.138579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:29.138757 | orchestrator | Tuesday 13 May 2025 23:31:29 +0000 (0:00:00.647) 0:00:54.757 *********** 2025-05-13 23:31:29.360058 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:29.362645 | orchestrator | 2025-05-13 23:31:29.362699 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:29.362713 | orchestrator | Tuesday 13 May 2025 23:31:29 +0000 (0:00:00.231) 0:00:54.989 *********** 2025-05-13 23:31:29.552617 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:29.552863 | orchestrator | 2025-05-13 23:31:29.553808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:29.554617 | orchestrator | Tuesday 13 May 2025 23:31:29 +0000 (0:00:00.192) 0:00:55.181 *********** 2025-05-13 23:31:29.768665 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:29.768824 | orchestrator | 2025-05-13 23:31:29.768913 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-13 23:31:29.769863 | orchestrator | Tuesday 13 May 2025 23:31:29 +0000 (0:00:00.215) 0:00:55.397 *********** 2025-05-13 23:31:30.002464 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:30.002735 | orchestrator | 2025-05-13 23:31:30.004754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:30.005779 | orchestrator | Tuesday 13 May 2025 23:31:29 +0000 (0:00:00.232) 0:00:55.629 *********** 2025-05-13 23:31:30.238622 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:30.239399 | orchestrator | 2025-05-13 23:31:30.240536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:30.240618 | orchestrator | Tuesday 13 May 2025 23:31:30 +0000 (0:00:00.237) 0:00:55.867 *********** 2025-05-13 23:31:30.900391 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-13 23:31:30.901244 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-13 23:31:30.902518 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-13 23:31:30.904289 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-13 23:31:30.905300 | orchestrator | 2025-05-13 23:31:30.906172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:30.907084 | orchestrator | Tuesday 13 May 2025 23:31:30 +0000 (0:00:00.661) 0:00:56.529 *********** 2025-05-13 23:31:31.099129 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:31.099337 | orchestrator | 2025-05-13 23:31:31.100140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:31.100667 | orchestrator | Tuesday 13 May 2025 23:31:31 +0000 (0:00:00.198) 0:00:56.727 *********** 2025-05-13 23:31:31.308967 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:31.310109 | orchestrator | 2025-05-13 23:31:31.312006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:31.312301 | orchestrator | Tuesday 13 May 2025 23:31:31 +0000 (0:00:00.209) 0:00:56.936 *********** 2025-05-13 23:31:31.514205 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:31.514728 | orchestrator | 2025-05-13 23:31:31.515656 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 23:31:31.516107 | orchestrator | Tuesday 13 May 2025 23:31:31 +0000 (0:00:00.205) 0:00:57.142 *********** 2025-05-13 23:31:31.705877 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:31.706122 | orchestrator | 2025-05-13 23:31:31.707191 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-13 23:31:31.708353 | orchestrator | Tuesday 13 May 2025 23:31:31 +0000 (0:00:00.191) 0:00:57.333 *********** 2025-05-13 23:31:32.038261 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:32.039579 | orchestrator | 2025-05-13 23:31:32.040923 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-13 23:31:32.041668 | orchestrator | Tuesday 13 May 2025 23:31:32 +0000 (0:00:00.332) 0:00:57.665 *********** 2025-05-13 23:31:32.248307 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53cfcf66-6862-5829-a71b-dc902cfbd9df'}}) 2025-05-13 23:31:32.249015 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': 'd153f4c4-5597-54b4-b460-41e490b92c19'}}) 2025-05-13 23:31:32.250417 | orchestrator | 2025-05-13 23:31:32.251203 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-13 23:31:32.252017 | orchestrator | Tuesday 13 May 2025 23:31:32 +0000 (0:00:00.211) 0:00:57.877 *********** 2025-05-13 23:31:34.114866 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'}) 2025-05-13 23:31:34.115701 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'}) 2025-05-13 23:31:34.119594 | orchestrator | 2025-05-13 23:31:34.119624 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-13 23:31:34.121819 | orchestrator | Tuesday 13 May 2025 23:31:34 +0000 (0:00:01.864) 0:00:59.742 *********** 2025-05-13 23:31:34.272609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:34.272782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:34.273575 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:34.273603 | orchestrator | 2025-05-13 23:31:34.274153 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-13 23:31:34.274473 | orchestrator | Tuesday 13 May 2025 23:31:34 +0000 (0:00:00.159) 0:00:59.902 *********** 2025-05-13 23:31:35.651939 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'}) 2025-05-13 23:31:35.652883 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'}) 2025-05-13 23:31:35.653816 | orchestrator | 2025-05-13 23:31:35.654451 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-13 23:31:35.656743 | orchestrator | Tuesday 13 May 2025 23:31:35 +0000 (0:00:01.376) 0:01:01.279 *********** 2025-05-13 23:31:35.799739 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:35.800830 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:35.801721 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:35.802974 | orchestrator | 2025-05-13 23:31:35.803891 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-13 23:31:35.805183 | orchestrator | Tuesday 13 May 2025 23:31:35 +0000 (0:00:00.149) 0:01:01.428 *********** 2025-05-13 23:31:35.945028 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:35.946491 | orchestrator | 2025-05-13 23:31:35.946956 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-13 23:31:35.947799 | orchestrator | Tuesday 13 May 2025 23:31:35 +0000 (0:00:00.144) 0:01:01.573 
*********** 2025-05-13 23:31:36.098137 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:36.099051 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:36.099891 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:36.100578 | orchestrator | 2025-05-13 23:31:36.102066 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-13 23:31:36.102519 | orchestrator | Tuesday 13 May 2025 23:31:36 +0000 (0:00:00.154) 0:01:01.727 *********** 2025-05-13 23:31:36.254456 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:36.255244 | orchestrator | 2025-05-13 23:31:36.256025 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-13 23:31:36.256550 | orchestrator | Tuesday 13 May 2025 23:31:36 +0000 (0:00:00.153) 0:01:01.880 *********** 2025-05-13 23:31:36.405083 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:36.405769 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:36.406156 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:36.406291 | orchestrator | 2025-05-13 23:31:36.406841 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-13 23:31:36.407133 | orchestrator | Tuesday 13 May 2025 23:31:36 +0000 (0:00:00.153) 0:01:02.034 *********** 2025-05-13 23:31:36.541122 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:36.541227 | orchestrator | 2025-05-13 23:31:36.546850 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-13 23:31:36.547697 | orchestrator | Tuesday 13 May 2025 23:31:36 +0000 (0:00:00.131) 0:01:02.166 *********** 2025-05-13 23:31:36.686451 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:36.687057 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:36.688052 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:36.688943 | orchestrator | 2025-05-13 23:31:36.689951 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-13 23:31:36.690632 | orchestrator | Tuesday 13 May 2025 23:31:36 +0000 (0:00:00.149) 0:01:02.315 *********** 2025-05-13 23:31:37.037990 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:31:37.038314 | orchestrator | 2025-05-13 23:31:37.039132 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-13 23:31:37.040314 | orchestrator | Tuesday 13 May 2025 23:31:37 +0000 (0:00:00.351) 0:01:02.666 *********** 2025-05-13 23:31:37.197636 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:37.198237 
| orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:37.199206 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:37.199880 | orchestrator | 2025-05-13 23:31:37.201029 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-13 23:31:37.201443 | orchestrator | Tuesday 13 May 2025 23:31:37 +0000 (0:00:00.158) 0:01:02.825 *********** 2025-05-13 23:31:37.340309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:37.341433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:37.342877 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:37.343743 | orchestrator | 2025-05-13 23:31:37.344355 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-13 23:31:37.345083 | orchestrator | Tuesday 13 May 2025 23:31:37 +0000 (0:00:00.142) 0:01:02.968 *********** 2025-05-13 23:31:37.498006 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:37.498167 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:37.498806 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:37.499760 | orchestrator | 2025-05-13 23:31:37.501178 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-13 23:31:37.501212 | orchestrator | Tuesday 13 May 2025 23:31:37 +0000 (0:00:00.157) 0:01:03.126 *********** 2025-05-13 23:31:37.638358 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:37.638490 | orchestrator | 2025-05-13 23:31:37.639117 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-13 23:31:37.639141 | orchestrator | Tuesday 13 May 2025 23:31:37 +0000 (0:00:00.141) 0:01:03.267 *********** 2025-05-13 23:31:37.779687 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:37.781476 | orchestrator | 2025-05-13 23:31:37.782706 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-13 23:31:37.783648 | orchestrator | Tuesday 13 May 2025 23:31:37 +0000 (0:00:00.140) 0:01:03.408 *********** 2025-05-13 23:31:37.915748 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:37.915990 | orchestrator | 2025-05-13 23:31:37.916659 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-13 23:31:37.917336 | orchestrator | Tuesday 13 May 2025 23:31:37 +0000 (0:00:00.136) 0:01:03.544 *********** 2025-05-13 23:31:38.055438 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 23:31:38.060957 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-13 23:31:38.061563 | orchestrator | } 2025-05-13 23:31:38.063635 | orchestrator | 2025-05-13 23:31:38.064493 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-13 23:31:38.065600 | orchestrator | Tuesday 13 May 2025 23:31:38 +0000 
(0:00:00.139) 0:01:03.683 *********** 2025-05-13 23:31:38.194319 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 23:31:38.195442 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-13 23:31:38.195644 | orchestrator | } 2025-05-13 23:31:38.197087 | orchestrator | 2025-05-13 23:31:38.198219 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-13 23:31:38.199945 | orchestrator | Tuesday 13 May 2025 23:31:38 +0000 (0:00:00.139) 0:01:03.823 *********** 2025-05-13 23:31:38.335556 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 23:31:38.335789 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-13 23:31:38.337678 | orchestrator | } 2025-05-13 23:31:38.338376 | orchestrator | 2025-05-13 23:31:38.338946 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-13 23:31:38.339912 | orchestrator | Tuesday 13 May 2025 23:31:38 +0000 (0:00:00.140) 0:01:03.963 *********** 2025-05-13 23:31:38.852199 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:31:38.852365 | orchestrator | 2025-05-13 23:31:38.853692 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-13 23:31:38.854590 | orchestrator | Tuesday 13 May 2025 23:31:38 +0000 (0:00:00.516) 0:01:04.480 *********** 2025-05-13 23:31:39.384426 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:31:39.385367 | orchestrator | 2025-05-13 23:31:39.386411 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-13 23:31:39.387062 | orchestrator | Tuesday 13 May 2025 23:31:39 +0000 (0:00:00.531) 0:01:05.011 *********** 2025-05-13 23:31:40.140216 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:31:40.140915 | orchestrator | 2025-05-13 23:31:40.141272 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-13 23:31:40.142813 | orchestrator | Tuesday 13 May 2025 23:31:40 +0000 (0:00:00.757) 0:01:05.769 *********** 2025-05-13 23:31:40.294898 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:31:40.294998 | orchestrator | 2025-05-13 23:31:40.295872 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-13 23:31:40.295955 | orchestrator | Tuesday 13 May 2025 23:31:40 +0000 (0:00:00.152) 0:01:05.922 *********** 2025-05-13 23:31:40.425207 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:40.426055 | orchestrator | 2025-05-13 23:31:40.427436 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-13 23:31:40.428504 | orchestrator | Tuesday 13 May 2025 23:31:40 +0000 (0:00:00.130) 0:01:06.053 *********** 2025-05-13 23:31:40.534349 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:40.534715 | orchestrator | 2025-05-13 23:31:40.535476 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-13 23:31:40.540344 | orchestrator | Tuesday 13 May 2025 23:31:40 +0000 (0:00:00.108) 0:01:06.162 *********** 2025-05-13 23:31:40.663274 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 23:31:40.664033 | orchestrator |  "vgs_report": { 2025-05-13 23:31:40.665171 | orchestrator |  "vg": [] 2025-05-13 23:31:40.667447 | orchestrator |  } 2025-05-13 23:31:40.667639 | orchestrator | } 2025-05-13 23:31:40.668727 | orchestrator | 2025-05-13 23:31:40.669601 | orchestrator | TASK [Print LVM VG sizes] 
****************************************************** 2025-05-13 23:31:40.670200 | orchestrator | Tuesday 13 May 2025 23:31:40 +0000 (0:00:00.129) 0:01:06.291 *********** 2025-05-13 23:31:40.807822 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:40.808321 | orchestrator | 2025-05-13 23:31:40.809745 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-13 23:31:40.809771 | orchestrator | Tuesday 13 May 2025 23:31:40 +0000 (0:00:00.144) 0:01:06.436 *********** 2025-05-13 23:31:40.932581 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:40.932685 | orchestrator | 2025-05-13 23:31:40.933351 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-13 23:31:40.933573 | orchestrator | Tuesday 13 May 2025 23:31:40 +0000 (0:00:00.125) 0:01:06.562 *********** 2025-05-13 23:31:41.054346 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:41.054625 | orchestrator | 2025-05-13 23:31:41.054737 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-13 23:31:41.055158 | orchestrator | Tuesday 13 May 2025 23:31:41 +0000 (0:00:00.122) 0:01:06.684 *********** 2025-05-13 23:31:41.166403 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:41.166575 | orchestrator | 2025-05-13 23:31:41.167110 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-13 23:31:41.169047 | orchestrator | Tuesday 13 May 2025 23:31:41 +0000 (0:00:00.112) 0:01:06.796 *********** 2025-05-13 23:31:41.294339 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:41.294446 | orchestrator | 2025-05-13 23:31:41.294841 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-13 23:31:41.295443 | orchestrator | Tuesday 13 May 2025 23:31:41 +0000 (0:00:00.125) 0:01:06.922 *********** 2025-05-13 23:31:41.426347 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:41.426595 | orchestrator | 2025-05-13 23:31:41.427182 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-13 23:31:41.427901 | orchestrator | Tuesday 13 May 2025 23:31:41 +0000 (0:00:00.130) 0:01:07.052 *********** 2025-05-13 23:31:41.549295 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:41.549960 | orchestrator | 2025-05-13 23:31:41.550403 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-13 23:31:41.550924 | orchestrator | Tuesday 13 May 2025 23:31:41 +0000 (0:00:00.126) 0:01:07.179 *********** 2025-05-13 23:31:41.668670 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:41.670276 | orchestrator | 2025-05-13 23:31:41.671279 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-13 23:31:41.671500 | orchestrator | Tuesday 13 May 2025 23:31:41 +0000 (0:00:00.117) 0:01:07.296 *********** 2025-05-13 23:31:41.957450 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:41.958078 | orchestrator | 2025-05-13 23:31:41.958925 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-13 23:31:41.959666 | orchestrator | Tuesday 13 May 2025 23:31:41 +0000 (0:00:00.290) 0:01:07.586 *********** 2025-05-13 23:31:42.095832 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:42.096241 | orchestrator | 2025-05-13 23:31:42.096633 | orchestrator | TASK 
[Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-13 23:31:42.096955 | orchestrator | Tuesday 13 May 2025 23:31:42 +0000 (0:00:00.139) 0:01:07.725 *********** 2025-05-13 23:31:42.218235 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:42.219048 | orchestrator | 2025-05-13 23:31:42.220730 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-13 23:31:42.221329 | orchestrator | Tuesday 13 May 2025 23:31:42 +0000 (0:00:00.121) 0:01:07.847 *********** 2025-05-13 23:31:42.337473 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:42.337718 | orchestrator | 2025-05-13 23:31:42.338309 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-13 23:31:42.339029 | orchestrator | Tuesday 13 May 2025 23:31:42 +0000 (0:00:00.117) 0:01:07.964 *********** 2025-05-13 23:31:42.465637 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:42.465848 | orchestrator | 2025-05-13 23:31:42.466640 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-13 23:31:42.467463 | orchestrator | Tuesday 13 May 2025 23:31:42 +0000 (0:00:00.130) 0:01:08.094 *********** 2025-05-13 23:31:42.601304 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:42.601491 | orchestrator | 2025-05-13 23:31:42.602602 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-13 23:31:42.603326 | orchestrator | Tuesday 13 May 2025 23:31:42 +0000 (0:00:00.135) 0:01:08.230 *********** 2025-05-13 23:31:42.742458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:42.744804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:42.745619 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:42.746396 | orchestrator | 2025-05-13 23:31:42.747539 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-13 23:31:42.748845 | orchestrator | Tuesday 13 May 2025 23:31:42 +0000 (0:00:00.141) 0:01:08.372 *********** 2025-05-13 23:31:42.881950 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:42.883025 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:42.883944 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:42.884591 | orchestrator | 2025-05-13 23:31:42.885582 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-13 23:31:42.886390 | orchestrator | Tuesday 13 May 2025 23:31:42 +0000 (0:00:00.138) 0:01:08.510 *********** 2025-05-13 23:31:43.011658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:43.012384 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:43.013035 | 
orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:43.013541 | orchestrator | 2025-05-13 23:31:43.014192 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-13 23:31:43.014844 | orchestrator | Tuesday 13 May 2025 23:31:43 +0000 (0:00:00.130) 0:01:08.641 *********** 2025-05-13 23:31:43.163297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:43.163621 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:43.164572 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:43.165090 | orchestrator | 2025-05-13 23:31:43.165835 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-13 23:31:43.166146 | orchestrator | Tuesday 13 May 2025 23:31:43 +0000 (0:00:00.151) 0:01:08.792 *********** 2025-05-13 23:31:43.310944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:43.311735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:43.312643 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:43.314285 | orchestrator | 2025-05-13 23:31:43.314759 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-13 23:31:43.315291 | orchestrator | Tuesday 13 May 2025 23:31:43 +0000 (0:00:00.146) 0:01:08.939 *********** 2025-05-13 23:31:43.432234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:43.432299 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:43.432438 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:43.433207 | orchestrator | 2025-05-13 23:31:43.433586 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-13 23:31:43.433971 | orchestrator | Tuesday 13 May 2025 23:31:43 +0000 (0:00:00.122) 0:01:09.062 *********** 2025-05-13 23:31:43.724420 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:43.724582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:43.726080 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:43.726627 | orchestrator | 2025-05-13 23:31:43.727644 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-13 23:31:43.729717 | orchestrator | Tuesday 13 May 2025 23:31:43 +0000 (0:00:00.291) 0:01:09.354 *********** 2025-05-13 23:31:43.875553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 
23:31:43.876425 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:43.877179 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:43.878337 | orchestrator | 2025-05-13 23:31:43.878984 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-13 23:31:43.879557 | orchestrator | Tuesday 13 May 2025 23:31:43 +0000 (0:00:00.150) 0:01:09.504 *********** 2025-05-13 23:31:44.412458 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:31:44.413511 | orchestrator | 2025-05-13 23:31:44.413583 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-13 23:31:44.413849 | orchestrator | Tuesday 13 May 2025 23:31:44 +0000 (0:00:00.533) 0:01:10.038 *********** 2025-05-13 23:31:44.952306 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:31:44.952415 | orchestrator | 2025-05-13 23:31:44.953124 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-13 23:31:44.954274 | orchestrator | Tuesday 13 May 2025 23:31:44 +0000 (0:00:00.541) 0:01:10.580 *********** 2025-05-13 23:31:45.139934 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:31:45.140667 | orchestrator | 2025-05-13 23:31:45.141401 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-13 23:31:45.142363 | orchestrator | Tuesday 13 May 2025 23:31:45 +0000 (0:00:00.188) 0:01:10.769 *********** 2025-05-13 23:31:45.308207 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'vg_name': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'}) 2025-05-13 23:31:45.309338 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'vg_name': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'}) 2025-05-13 23:31:45.310629 | orchestrator | 2025-05-13 23:31:45.312075 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-13 23:31:45.312758 | orchestrator | Tuesday 13 May 2025 23:31:45 +0000 (0:00:00.166) 0:01:10.936 *********** 2025-05-13 23:31:45.471994 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:45.473118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:45.475657 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:45.475691 | orchestrator | 2025-05-13 23:31:45.477329 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-13 23:31:45.477353 | orchestrator | Tuesday 13 May 2025 23:31:45 +0000 (0:00:00.163) 0:01:11.099 *********** 2025-05-13 23:31:45.623732 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})  2025-05-13 23:31:45.626080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})  2025-05-13 23:31:45.626723 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:45.627796 | orchestrator | 2025-05-13 23:31:45.628082 
| orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-13 23:31:45.628866 | orchestrator | Tuesday 13 May 2025 23:31:45 +0000 (0:00:00.152) 0:01:11.252 ***********
2025-05-13 23:31:45.787012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})
2025-05-13 23:31:45.788069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})
2025-05-13 23:31:45.788803 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:31:45.789663 | orchestrator |
2025-05-13 23:31:45.790294 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-13 23:31:45.790986 | orchestrator | Tuesday 13 May 2025 23:31:45 +0000 (0:00:00.163) 0:01:11.415 ***********
2025-05-13 23:31:45.933029 | orchestrator | ok: [testbed-node-5] => {
2025-05-13 23:31:45.933140 | orchestrator |  "lvm_report": {
2025-05-13 23:31:45.933736 | orchestrator |  "lv": [
2025-05-13 23:31:45.934829 | orchestrator |  {
2025-05-13 23:31:45.936439 | orchestrator |  "lv_name": "osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df",
2025-05-13 23:31:45.936858 | orchestrator |  "vg_name": "ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df"
2025-05-13 23:31:45.938076 | orchestrator |  },
2025-05-13 23:31:45.939190 | orchestrator |  {
2025-05-13 23:31:45.940096 | orchestrator |  "lv_name": "osd-block-d153f4c4-5597-54b4-b460-41e490b92c19",
2025-05-13 23:31:45.941098 | orchestrator |  "vg_name": "ceph-d153f4c4-5597-54b4-b460-41e490b92c19"
2025-05-13 23:31:45.941829 | orchestrator |  }
2025-05-13 23:31:45.942723 | orchestrator |  ],
2025-05-13 23:31:45.943119 | orchestrator |  "pv": [
2025-05-13 23:31:45.943934 | orchestrator |  {
2025-05-13 23:31:45.944673 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-13 23:31:45.945366 | orchestrator |  "vg_name": "ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df"
2025-05-13 23:31:45.946111 | orchestrator |  },
2025-05-13 23:31:45.947038 | orchestrator |  {
2025-05-13 23:31:45.947147 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-13 23:31:45.947682 | orchestrator |  "vg_name": "ceph-d153f4c4-5597-54b4-b460-41e490b92c19"
2025-05-13 23:31:45.948192 | orchestrator |  }
2025-05-13 23:31:45.948630 | orchestrator |  ]
2025-05-13 23:31:45.949739 | orchestrator |  }
2025-05-13 23:31:45.950118 | orchestrator | }
2025-05-13 23:31:45.950225 | orchestrator |
2025-05-13 23:31:45.950912 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:31:45.951382 | orchestrator | 2025-05-13 23:31:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:31:45.951678 | orchestrator | 2025-05-13 23:31:45 | INFO  | Please wait and do not abort execution.
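The lvm_report above confirms the layout produced by the earlier "Create block VGs" and "Create block LVs" tasks: one ceph-<uuid> volume group per OSD disk (/dev/sdb, /dev/sdc), each carrying a single osd-block-<uuid> logical volume. A minimal standalone sketch of that pattern with the community.general LVM modules follows; the host name and task bodies are an approximation for illustration, not the collection's actual implementation (the UUIDs are the ones printed in the log):

  - hosts: testbed-node-5
    become: true
    vars:
      # Mapping as printed by "Create dict of block VGs -> PVs from ceph_osd_devices"
      ceph_osd_devices:
        sdb: { osd_lvm_uuid: 53cfcf66-6862-5829-a71b-dc902cfbd9df }
        sdc: { osd_lvm_uuid: d153f4c4-5597-54b4-b460-41e490b92c19 }
    tasks:
      - name: Create block VGs (one VG per OSD device)
        community.general.lvg:
          vg: "ceph-{{ item.value.osd_lvm_uuid }}"
          pvs: "/dev/{{ item.key }}"
        loop: "{{ ceph_osd_devices | dict2items }}"

      - name: Create block LVs (one LV filling each VG)
        community.general.lvol:
          vg: "ceph-{{ item.value.osd_lvm_uuid }}"
          lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
          size: 100%FREE
          shrink: false   # avoid resizing the LV on reruns once the VG is full
        loop: "{{ ceph_osd_devices | dict2items }}"

ceph-volume can then consume each pair as data/data_vg, which is what the lvm_volumes items (data/data_vg) shown in the skipped verification tasks refer to.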
2025-05-13 23:31:45.952167 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-13 23:31:45.955040 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-13 23:31:45.955750 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-13 23:31:45.956710 | orchestrator |
2025-05-13 23:31:45.957434 | orchestrator |
2025-05-13 23:31:45.958115 | orchestrator |
2025-05-13 23:31:45.959160 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:31:45.960366 | orchestrator | Tuesday 13 May 2025 23:31:45 +0000 (0:00:00.145) 0:01:11.561 ***********
2025-05-13 23:31:45.960953 | orchestrator | ===============================================================================
2025-05-13 23:31:45.961596 | orchestrator | Create block VGs -------------------------------------------------------- 5.67s
2025-05-13 23:31:45.962200 | orchestrator | Create block LVs -------------------------------------------------------- 4.05s
2025-05-13 23:31:45.962684 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.90s
2025-05-13 23:31:45.963285 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.82s
2025-05-13 23:31:45.963885 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s
2025-05-13 23:31:45.964709 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s
2025-05-13 23:31:45.965382 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s
2025-05-13 23:31:45.966136 | orchestrator | Add known partitions to the list of available block devices ------------- 1.46s
2025-05-13 23:31:45.966855 | orchestrator | Add known links to the list of available block devices ------------------ 1.24s
2025-05-13 23:31:45.967324 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s
2025-05-13 23:31:45.968023 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s
2025-05-13 23:31:45.968543 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s
2025-05-13 23:31:45.969228 | orchestrator | Print LVM report data --------------------------------------------------- 0.84s
2025-05-13 23:31:45.969857 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s
2025-05-13 23:31:45.970451 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.70s
2025-05-13 23:31:45.971075 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s
2025-05-13 23:31:45.971629 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.67s
2025-05-13 23:31:45.972354 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-05-13 23:31:45.972924 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.66s
2025-05-13 23:31:45.973664 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.65s
2025-05-13 23:31:48.374634 | orchestrator | 2025-05-13 23:31:48 | INFO  | Task b1591b26-8afe-4b73-b09b-a8cfecceb956 (facts) was prepared for execution.
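The per-task timing table in this TASKS RECAP is the output format of a task-profiling callback (it matches ansible.posix.profile_tasks). The LV/PV inventory tasks near the top of the table reduce to LVM's JSON reporting mode; a rough, self-contained approximation of those steps is sketched below — the --select expressions and play header are guesses for illustration, only the task and variable names are taken from the log:

  - hosts: testbed-node-5
    become: true
    tasks:
      - name: Get list of Ceph LVs with associated VGs
        ansible.builtin.command: >-
          lvs --reportformat json -o lv_name,vg_name
          --select "lv_name =~ ^osd-"
        register: _lvs_cmd_output
        changed_when: false

      - name: Get list of Ceph PVs with associated VGs
        ansible.builtin.command: >-
          pvs --reportformat json -o pv_name,vg_name
          --select "vg_name =~ ^ceph-"
        register: _pvs_cmd_output
        changed_when: false

      # lvs/pvs --reportformat json return {"report": [{"lv": [...]}]} /
      # {"report": [{"pv": [...]}]}; merging the two lists yields the
      # lvm_report structure printed by "Print LVM report data".
      - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
        ansible.builtin.set_fact:
          lvm_report:
            lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
            pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

      - name: Print LVM report data
        ansible.builtin.debug:
          var: lvm_report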
2025-05-13 23:31:48.374729 | orchestrator | 2025-05-13 23:31:48 | INFO  | It takes a moment until task b1591b26-8afe-4b73-b09b-a8cfecceb956 (facts) has been started and output is visible here. 2025-05-13 23:31:52.490255 | orchestrator | 2025-05-13 23:31:52.492704 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-13 23:31:52.493613 | orchestrator | 2025-05-13 23:31:52.496795 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-13 23:31:52.499201 | orchestrator | Tuesday 13 May 2025 23:31:52 +0000 (0:00:00.267) 0:00:00.267 *********** 2025-05-13 23:31:53.662998 | orchestrator | ok: [testbed-manager] 2025-05-13 23:31:53.663106 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:31:53.663129 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:31:53.663239 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:31:53.664162 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:31:53.664236 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:31:53.664737 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:31:53.667091 | orchestrator | 2025-05-13 23:31:53.668131 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-13 23:31:53.668813 | orchestrator | Tuesday 13 May 2025 23:31:53 +0000 (0:00:01.172) 0:00:01.440 *********** 2025-05-13 23:31:53.861216 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:31:53.943013 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:31:54.025740 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:31:54.104105 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:31:54.185259 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:31:54.920005 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:31:54.923426 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:31:54.923912 | orchestrator | 2025-05-13 23:31:54.923973 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-13 23:31:54.924012 | orchestrator | 2025-05-13 23:31:54.924118 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-13 23:31:54.924250 | orchestrator | Tuesday 13 May 2025 23:31:54 +0000 (0:00:01.259) 0:00:02.699 *********** 2025-05-13 23:31:59.958240 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:31:59.959142 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:31:59.959261 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:31:59.963247 | orchestrator | ok: [testbed-manager] 2025-05-13 23:31:59.964068 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:31:59.965058 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:31:59.966209 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:31:59.967463 | orchestrator | 2025-05-13 23:31:59.968688 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-13 23:31:59.969663 | orchestrator | 2025-05-13 23:31:59.970231 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-13 23:31:59.970909 | orchestrator | Tuesday 13 May 2025 23:31:59 +0000 (0:00:05.038) 0:00:07.738 *********** 2025-05-13 23:32:00.126821 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:32:00.205114 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:32:00.287277 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:32:00.386709 | orchestrator | skipping: [testbed-node-2] 2025-05-13 
23:32:00.490122 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:32:00.529347 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:32:00.530400 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:32:00.533849 | orchestrator |
2025-05-13 23:32:00.536492 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:32:00.536542 | orchestrator | 2025-05-13 23:32:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 23:32:00.536611 | orchestrator | 2025-05-13 23:32:00 | INFO  | Please wait and do not abort execution.
2025-05-13 23:32:00.538158 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:32:00.539469 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:32:00.540507 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:32:00.541575 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:32:00.544712 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:32:00.545407 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:32:00.546641 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:32:00.547748 | orchestrator |
2025-05-13 23:32:00.548319 | orchestrator |
2025-05-13 23:32:00.550636 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:32:00.551306 | orchestrator | Tuesday 13 May 2025 23:32:00 +0000 (0:00:00.573) 0:00:08.312 ***********
2025-05-13 23:32:00.552194 | orchestrator | ===============================================================================
2025-05-13 23:32:00.552594 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.04s
2025-05-13 23:32:00.553659 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2025-05-13 23:32:00.554640 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s
2025-05-13 23:32:00.555472 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2025-05-13 23:32:01.240408 | orchestrator |
2025-05-13 23:32:01.243208 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue May 13 23:32:01 UTC 2025
2025-05-13 23:32:01.243246 | orchestrator |
2025-05-13 23:32:02.961192 | orchestrator | 2025-05-13 23:32:02 | INFO  | Collection nutshell is prepared for execution
2025-05-13 23:32:02.961321 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [0] - dotfiles
2025-05-13 23:32:02.966668 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [0] - homer
2025-05-13 23:32:02.966737 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [0] - netdata
2025-05-13 23:32:02.966787 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [0] - openstackclient
2025-05-13 23:32:02.966808 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [0] - phpmyadmin
2025-05-13 23:32:02.966829 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [0] - common
2025-05-13 23:32:02.968610 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [1] -- loadbalancer
2025-05-13 23:32:02.968667 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [2] --- opensearch
2025-05-13 23:32:02.968688 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [2] --- mariadb-ng
2025-05-13 23:32:02.968707 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [3] ---- horizon
2025-05-13 23:32:02.968877 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [3] ---- keystone
2025-05-13 23:32:02.968901 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [4] ----- neutron
2025-05-13 23:32:02.968920 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [5] ------ wait-for-nova
2025-05-13 23:32:02.968941 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [5] ------ octavia
2025-05-13 23:32:02.969426 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [4] ----- barbican
2025-05-13 23:32:02.969465 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [4] ----- designate
2025-05-13 23:32:02.969486 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [4] ----- ironic
2025-05-13 23:32:02.969976 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [4] ----- placement
2025-05-13 23:32:02.970241 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [4] ----- magnum
2025-05-13 23:32:02.970418 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [1] -- openvswitch
2025-05-13 23:32:02.970446 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [2] --- ovn
2025-05-13 23:32:02.970464 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [1] -- memcached
2025-05-13 23:32:02.970482 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [1] -- redis
2025-05-13 23:32:02.970502 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [1] -- rabbitmq-ng
2025-05-13 23:32:02.970632 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [0] - kubernetes
2025-05-13 23:32:02.972539 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [1] -- kubeconfig
2025-05-13 23:32:02.972647 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [1] -- copy-kubeconfig
2025-05-13 23:32:02.972754 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [0] - ceph
2025-05-13 23:32:02.974290 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [1] -- ceph-pools
2025-05-13 23:32:02.975155 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [2] --- copy-ceph-keys
2025-05-13 23:32:02.975220 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [3] ---- cephclient
2025-05-13 23:32:02.975261 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-05-13 23:32:02.975276 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [4] ----- wait-for-keystone
2025-05-13 23:32:02.975291 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [5] ------ kolla-ceph-rgw
2025-05-13 23:32:02.975305 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [5] ------ glance
2025-05-13 23:32:02.975319 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [5] ------ cinder
2025-05-13 23:32:02.975334 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [5] ------ nova
2025-05-13 23:32:02.975347 | orchestrator | 2025-05-13 23:32:02 | INFO  | A [4] ----- prometheus
2025-05-13 23:32:02.975361 | orchestrator | 2025-05-13 23:32:02 | INFO  | D [5] ------ grafana
2025-05-13 23:32:03.181428 | orchestrator | 2025-05-13 23:32:03 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-05-13 23:32:03.181532 | orchestrator | 2025-05-13 23:32:03 | INFO  | Tasks are running in the background
2025-05-13 23:32:06.249227 | orchestrator | 2025-05-13 23:32:06 | INFO  | No task IDs specified, wait for all currently running tasks
2025-05-13 23:32:08.431923 | orchestrator | 2025-05-13 23:32:08 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:32:08.436422 | orchestrator | 2025-05-13 23:32:08 |
INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED 2025-05-13 23:32:08.436465 | orchestrator | 2025-05-13 23:32:08 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:32:08.437407 | orchestrator | 2025-05-13 23:32:08 | INFO  | Task c01a5f68-f907-4ba3-b267-0823f29e4701 is in state STARTED 2025-05-13 23:32:08.437483 | orchestrator | 2025-05-13 23:32:08 | INFO  | Task a80c8f07-345d-4cb3-b344-4f7d82fa3b34 is in state STARTED 2025-05-13 23:32:08.437906 | orchestrator | 2025-05-13 23:32:08 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED 2025-05-13 23:32:08.442122 | orchestrator | 2025-05-13 23:32:08 | INFO  | Task 6e34e8f1-e06f-49d4-aeaa-1e9ac14d61a7 is in state STARTED 2025-05-13 23:32:08.442231 | orchestrator | 2025-05-13 23:32:08 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED 2025-05-13 23:32:08.442255 | orchestrator | 2025-05-13 23:32:08 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:32:11.515105 | orchestrator | 2025-05-13 23:32:11 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 23:32:11.523059 | orchestrator | 2025-05-13 23:32:11 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED 2025-05-13 23:32:11.536080 | orchestrator | 2025-05-13 23:32:11 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:32:11.536997 | orchestrator | 2025-05-13 23:32:11 | INFO  | Task c01a5f68-f907-4ba3-b267-0823f29e4701 is in state STARTED 2025-05-13 23:32:11.537722 | orchestrator | 2025-05-13 23:32:11 | INFO  | Task a80c8f07-345d-4cb3-b344-4f7d82fa3b34 is in state STARTED 2025-05-13 23:32:11.553383 | orchestrator | 2025-05-13 23:32:11 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED 2025-05-13 23:32:11.553431 | orchestrator | 2025-05-13 23:32:11 | INFO  | Task 6e34e8f1-e06f-49d4-aeaa-1e9ac14d61a7 is in state STARTED 2025-05-13 23:32:11.553443 | orchestrator | 2025-05-13 23:32:11 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED 2025-05-13 23:32:11.553454 | orchestrator | 2025-05-13 23:32:11 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:32:14.598260 | orchestrator | 2025-05-13 23:32:14 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 23:32:14.598486 | orchestrator | 2025-05-13 23:32:14 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED 2025-05-13 23:32:14.599252 | orchestrator | 2025-05-13 23:32:14 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:32:14.599683 | orchestrator | 2025-05-13 23:32:14 | INFO  | Task c01a5f68-f907-4ba3-b267-0823f29e4701 is in state STARTED 2025-05-13 23:32:14.600328 | orchestrator | 2025-05-13 23:32:14 | INFO  | Task a80c8f07-345d-4cb3-b344-4f7d82fa3b34 is in state STARTED 2025-05-13 23:32:14.601487 | orchestrator | 2025-05-13 23:32:14 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED 2025-05-13 23:32:14.602013 | orchestrator | 2025-05-13 23:32:14 | INFO  | Task 6e34e8f1-e06f-49d4-aeaa-1e9ac14d61a7 is in state STARTED 2025-05-13 23:32:14.605158 | orchestrator | 2025-05-13 23:32:14 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED 2025-05-13 23:32:14.605207 | orchestrator | 2025-05-13 23:32:14 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:32:17.645858 | orchestrator | 2025-05-13 23:32:17 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 
23:32:17.647502 | orchestrator | 2025-05-13 23:32:17 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED 2025-05-13 23:32:17.648055 | orchestrator | 2025-05-13 23:32:17 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:32:17.648500 | orchestrator | 2025-05-13 23:32:17 | INFO  | Task c01a5f68-f907-4ba3-b267-0823f29e4701 is in state STARTED 2025-05-13 23:32:17.650187 | orchestrator | 2025-05-13 23:32:17 | INFO  | Task a80c8f07-345d-4cb3-b344-4f7d82fa3b34 is in state SUCCESS 2025-05-13 23:32:17.650733 | orchestrator | 2025-05-13 23:32:17 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED 2025-05-13 23:32:17.655165 | orchestrator | 2025-05-13 23:32:17 | INFO  | Task 6e34e8f1-e06f-49d4-aeaa-1e9ac14d61a7 is in state STARTED 2025-05-13 23:32:17.659022 | orchestrator | 2025-05-13 23:32:17 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED 2025-05-13 23:32:17.659097 | orchestrator | 2025-05-13 23:32:17 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:32:20.719210 | orchestrator | 2025-05-13 23:32:20 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 23:32:20.734504 | orchestrator | 2025-05-13 23:32:20 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED 2025-05-13 23:32:20.734659 | orchestrator | 2025-05-13 23:32:20 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:32:20.734676 | orchestrator | 2025-05-13 23:32:20 | INFO  | Task c01a5f68-f907-4ba3-b267-0823f29e4701 is in state STARTED 2025-05-13 23:32:20.734688 | orchestrator | 2025-05-13 23:32:20 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED 2025-05-13 23:32:20.734706 | orchestrator | 2025-05-13 23:32:20 | INFO  | Task 6e34e8f1-e06f-49d4-aeaa-1e9ac14d61a7 is in state SUCCESS 2025-05-13 23:32:20.734725 | orchestrator | 2025-05-13 23:32:20 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED 2025-05-13 23:32:20.734744 | orchestrator | 2025-05-13 23:32:20 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:32:23.766887 | orchestrator | 2025-05-13 23:32:23 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 23:32:23.771108 | orchestrator | 2025-05-13 23:32:23 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED 2025-05-13 23:32:23.771532 | orchestrator | 2025-05-13 23:32:23 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:32:23.776828 | orchestrator | 2025-05-13 23:32:23 | INFO  | Task c01a5f68-f907-4ba3-b267-0823f29e4701 is in state STARTED 2025-05-13 23:32:23.777456 | orchestrator | 2025-05-13 23:32:23 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED 2025-05-13 23:32:23.778086 | orchestrator | 2025-05-13 23:32:23 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED 2025-05-13 23:32:23.778703 | orchestrator | 2025-05-13 23:32:23 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED 2025-05-13 23:32:23.778727 | orchestrator | 2025-05-13 23:32:23 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:32:26.867687 | orchestrator | 2025-05-13 23:32:26 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 23:32:26.867868 | orchestrator | 2025-05-13 23:32:26 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED 2025-05-13 23:32:26.868273 | orchestrator | 2025-05-13 23:32:26 | INFO  | Task 
e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:32:26.869281 | orchestrator | 2025-05-13 23:32:26 | INFO  | Task c01a5f68-f907-4ba3-b267-0823f29e4701 is in state STARTED 2025-05-13 23:32:26.872591 | orchestrator | 2025-05-13 23:32:26 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED 2025-05-13 23:32:26.872667 | orchestrator | 2025-05-13 23:32:26 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED 2025-05-13 23:32:26.873066 | orchestrator | 2025-05-13 23:32:26 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED 2025-05-13 23:32:26.873089 | orchestrator | 2025-05-13 23:32:26 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:32:29.996203 | orchestrator | 2025-05-13 23:32:29 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 23:32:29.996751 | orchestrator | 2025-05-13 23:32:29 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED 2025-05-13 23:32:29.997104 | orchestrator | 2025-05-13 23:32:29 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:32:30.001009 | orchestrator | 2025-05-13 23:32:29 | INFO  | Task c01a5f68-f907-4ba3-b267-0823f29e4701 is in state STARTED 2025-05-13 23:32:30.001418 | orchestrator | 2025-05-13 23:32:29 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED 2025-05-13 23:32:30.004685 | orchestrator | 2025-05-13 23:32:30 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED 2025-05-13 23:32:30.004816 | orchestrator | 2025-05-13 23:32:30 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED 2025-05-13 23:32:30.008422 | orchestrator | 2025-05-13 23:32:30 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:32:33.100858 | orchestrator | 2025-05-13 23:32:33 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 23:32:33.100956 | orchestrator | 2025-05-13 23:32:33 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED 2025-05-13 23:32:33.100969 | orchestrator | 2025-05-13 23:32:33 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:32:33.101417 | orchestrator | 2025-05-13 23:32:33 | INFO  | Task c01a5f68-f907-4ba3-b267-0823f29e4701 is in state STARTED 2025-05-13 23:32:33.103794 | orchestrator | 2025-05-13 23:32:33 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED 2025-05-13 23:32:33.104401 | orchestrator | 2025-05-13 23:32:33 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED 2025-05-13 23:32:33.104439 | orchestrator | 2025-05-13 23:32:33 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED 2025-05-13 23:32:33.104491 | orchestrator | 2025-05-13 23:32:33 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:32:36.170672 | orchestrator | 2025-05-13 23:32:36 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 23:32:36.171686 | orchestrator | 2025-05-13 23:32:36 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED 2025-05-13 23:32:36.177990 | orchestrator | 2025-05-13 23:32:36 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:32:36.183404 | orchestrator | None 2025-05-13 23:32:36.183484 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2025-05-13 23:32:36.183497 | orchestrator | -vvvv to see details 2025-05-13 23:32:36.183510 | orchestrator | 2025-05-13 
23:32:36.183521 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-13 23:32:36.183533 | orchestrator | 2025-05-13 23:32:36.183544 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 23:32:36.183557 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true} 2025-05-13 23:32:36.183613 | orchestrator | 2025-05-13 23:32:36.183632 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:32:36.183644 | orchestrator | testbed-manager : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:32:36.183656 | orchestrator | 2025-05-13 23:32:36.183667 | orchestrator | 2025-05-13 23:32:36.183678 | orchestrator | 2025-05-13 23:32:36.183688 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-13 23:32:36.183699 | orchestrator | 2025-05-13 23:32:36.183710 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-13 23:32:36.183721 | orchestrator | Tuesday 13 May 2025 23:32:17 +0000 (0:00:00.824) 0:00:00.824 *********** 2025-05-13 23:32:36.183734 | orchestrator | changed: [testbed-manager] 2025-05-13 23:32:36.183753 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:32:36.183772 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:32:36.183790 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:32:36.183808 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:32:36.183826 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:32:36.183844 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:32:36.183855 | orchestrator | 2025-05-13 23:32:36.183866 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-05-13 23:32:36.183877 | orchestrator | Tuesday 13 May 2025 23:32:22 +0000 (0:00:04.210) 0:00:05.035 *********** 2025-05-13 23:32:36.183888 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-13 23:32:36.183899 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-13 23:32:36.183910 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-13 23:32:36.183922 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-13 23:32:36.183934 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-13 23:32:36.183947 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-13 23:32:36.183958 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-13 23:32:36.183970 | orchestrator | 2025-05-13 23:32:36.183983 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-05-13 23:32:36.184002 | orchestrator | Tuesday 13 May 2025 23:32:23 +0000 (0:00:01.835) 0:00:06.871 *********** 2025-05-13 23:32:36.184016 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-13 23:32:22.590948', 'end': '2025-05-13 23:32:22.600187', 'delta': '0:00:00.009239', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-13 23:32:36.184051 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-13 23:32:22.587738', 'end': '2025-05-13 23:32:22.591657', 'delta': '0:00:00.003919', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-13 23:32:36.184086 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-13 23:32:22.683662', 'end': '2025-05-13 23:32:22.691303', 'delta': '0:00:00.007641', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-13 23:32:36.184100 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-13 23:32:22.906982', 'end': '2025-05-13 23:32:22.915970', 'delta': '0:00:00.008988', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
2025-05-13 23:32:36.184116 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-13 23:32:23.233549', 'end': '2025-05-13 23:32:23.243248', 'delta': '0:00:00.009699', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-13 23:32:36.184128 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-13 23:32:23.495893', 'end': '2025-05-13 23:32:23.504741', 'delta': '0:00:00.008848', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-13 23:32:36.184153 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-13 23:32:23.582192', 'end': '2025-05-13 23:32:23.589643', 'delta': '0:00:00.007451', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-13 23:32:36.184165 | orchestrator |
2025-05-13 23:32:36.184176 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-05-13 23:32:36.184187 | orchestrator | Tuesday 13 May 2025 23:32:26 +0000 (0:00:02.900) 0:00:09.771 ***********
2025-05-13 23:32:36.184198 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-13 23:32:36.184209 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-13 23:32:36.184219 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-13 23:32:36.184236 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-13 23:32:36.184247 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-13 23:32:36.184257 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-13 23:32:36.184268 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-13 23:32:36.184279 | orchestrator |
2025-05-13 23:32:36.184290 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-05-13 23:32:36.184300 | orchestrator | Tuesday 13 May 2025 23:32:29 +0000 (0:00:02.991) 0:00:12.763 ***********
2025-05-13 23:32:36.184311 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-05-13 23:32:36.184322 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-05-13 23:32:36.184332 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-05-13 23:32:36.184343 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-05-13 23:32:36.184354 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-05-13 23:32:36.184364 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-05-13 23:32:36.184375 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-05-13 23:32:36.184386 | orchestrator |
2025-05-13 23:32:36.184396 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:32:36.184407 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:32:36.184418 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:32:36.184429 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:32:36.184440 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:32:36.184459 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:32:36.184469 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:32:36.184480 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:32:36.184491 | orchestrator |
2025-05-13 23:32:36.184502 | orchestrator |
2025-05-13 23:32:36.184513 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:32:36.184528 | orchestrator | Tuesday 13 May 2025 23:32:33 +0000 (0:00:03.454) 0:00:16.218 ***********
2025-05-13 23:32:36.184539 | orchestrator | ===============================================================================
2025-05-13 23:32:36.184550 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.21s
2025-05-13 23:32:36.184560 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.45s
2025-05-13 23:32:36.184571 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.99s
2025-05-13 23:32:36.184612 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.90s
2025-05-13 23:32:36.184623 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.84s
2025-05-13 23:32:36.184663 | orchestrator | 2025-05-13 23:32:36 | INFO  | Task c01a5f68-f907-4ba3-b267-0823f29e4701 is in state SUCCESS
2025-05-13 23:32:36.184743 | orchestrator | 2025-05-13 23:32:36 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:32:36.187834 | orchestrator | 2025-05-13 23:32:36 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:32:36.189934 | orchestrator | 2025-05-13 23:32:36 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED
2025-05-13 23:32:36.189988 | orchestrator | 2025-05-13 23:32:36 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:32:39.244053 | orchestrator | 2025-05-13 23:32:39 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:32:39.244161 | orchestrator | 2025-05-13 23:32:39 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:32:39.247168 | orchestrator | 2025-05-13 23:32:39 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:32:39.248988 | orchestrator | 2025-05-13 23:32:39 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:32:39.249703 | orchestrator | 2025-05-13 23:32:39 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:32:39.251249 | orchestrator | 2025-05-13 23:32:39 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED
2025-05-13 23:32:39.251372 | orchestrator | 2025-05-13 23:32:39 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:32:42.307943 | orchestrator | 2025-05-13 23:32:42 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:32:42.308200 | orchestrator | 2025-05-13 23:32:42 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:32:42.309304 | orchestrator | 2025-05-13 23:32:42 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:32:42.310158 | orchestrator | 2025-05-13 23:32:42 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:32:42.311235 | orchestrator | 2025-05-13 23:32:42 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:32:42.312268 | orchestrator | 2025-05-13 23:32:42 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED
2025-05-13 23:32:42.312350 | orchestrator | 2025-05-13 23:32:42 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:32:45.359372 | orchestrator | 2025-05-13 23:32:45 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:32:45.361142 | orchestrator | 2025-05-13 23:32:45 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:32:45.362077 | orchestrator | 2025-05-13 23:32:45 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:32:45.363203 | orchestrator | 2025-05-13 23:32:45 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:32:45.364065 | orchestrator | 2025-05-13 23:32:45 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:32:45.365210 | orchestrator | 2025-05-13 23:32:45 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED
2025-05-13 23:32:45.365240 | orchestrator | 2025-05-13 23:32:45 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:32:48.546383 | orchestrator | 2025-05-13 23:32:48 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:32:48.551165 | orchestrator | 2025-05-13 23:32:48 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:32:48.553245 | orchestrator | 2025-05-13 23:32:48 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:32:48.555138 | orchestrator | 2025-05-13 23:32:48 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:32:48.556011 | orchestrator | 2025-05-13 23:32:48 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:32:48.559332 | orchestrator | 2025-05-13 23:32:48 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED
2025-05-13 23:32:48.559388 | orchestrator | 2025-05-13 23:32:48 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:32:51.737612 | orchestrator | 2025-05-13 23:32:51 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:32:51.738525 | orchestrator | 2025-05-13 23:32:51 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:32:51.741225 | orchestrator | 2025-05-13 23:32:51 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:32:51.743898 | orchestrator | 2025-05-13 23:32:51 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:32:51.748063 | orchestrator | 2025-05-13 23:32:51 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:32:51.749503 | orchestrator | 2025-05-13 23:32:51 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state STARTED
2025-05-13 23:32:51.749564 | orchestrator | 2025-05-13 23:32:51 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:32:54.845542 | orchestrator | 2025-05-13 23:32:54 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:32:54.848017 | orchestrator | 2025-05-13 23:32:54 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:32:54.850011 | orchestrator | 2025-05-13 23:32:54 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:32:54.853834 | orchestrator | 2025-05-13 23:32:54 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:32:54.853882 | orchestrator | 2025-05-13 23:32:54 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:32:54.854124 | orchestrator | 2025-05-13 23:32:54 | INFO  | Task 1dfa3fae-3835-4585-85b8-d492ab8e4740 is in state SUCCESS
2025-05-13 23:32:54.854218 | orchestrator | 2025-05-13 23:32:54 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:32:57.915714 | orchestrator | 2025-05-13 23:32:57 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:32:57.919636 | orchestrator | 2025-05-13 23:32:57 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:32:57.919820 | orchestrator | 2025-05-13 23:32:57 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:32:57.922458 | orchestrator | 2025-05-13 23:32:57 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:32:57.924568 | orchestrator | 2025-05-13 23:32:57 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:32:57.924864 | orchestrator | 2025-05-13 23:32:57 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:33:00.993772 | orchestrator | 2025-05-13 23:33:00 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:33:00.996507 | orchestrator | 2025-05-13 23:33:00 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:33:00.997957 | orchestrator | 2025-05-13 23:33:00 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:33:01.001609 | orchestrator | 2025-05-13 23:33:00 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:33:01.007157 | orchestrator | 2025-05-13 23:33:01 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:33:01.007252 | orchestrator | 2025-05-13 23:33:01 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:33:04.060470 | orchestrator | 2025-05-13 23:33:04 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:33:04.065113 | orchestrator | 2025-05-13 23:33:04 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:33:04.065183 | orchestrator | 2025-05-13 23:33:04 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:33:04.066066 | orchestrator | 2025-05-13 23:33:04 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:33:04.069280 | orchestrator | 2025-05-13 23:33:04 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:33:04.069338 | orchestrator | 2025-05-13 23:33:04 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:33:07.122496 | orchestrator | 2025-05-13 23:33:07 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:33:07.123662 | orchestrator | 2025-05-13 23:33:07 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:33:07.124499 | orchestrator | 2025-05-13 23:33:07 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:33:07.128781 | orchestrator | 2025-05-13 23:33:07 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:33:07.131741 | orchestrator | 2025-05-13 23:33:07 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:33:07.131802 | orchestrator | 2025-05-13 23:33:07 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:33:10.195697 | orchestrator | 2025-05-13 23:33:10 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:33:10.196806 | orchestrator | 2025-05-13 23:33:10 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:33:10.203710 | orchestrator | 2025-05-13 23:33:10 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:33:10.207129 | orchestrator | 2025-05-13 23:33:10 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:33:10.211873 | orchestrator | 2025-05-13 23:33:10 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:33:10.211914 | orchestrator | 2025-05-13 23:33:10 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:33:13.257139 | orchestrator | 2025-05-13 23:33:13 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:33:13.260782 | orchestrator | 2025-05-13 23:33:13 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:33:13.263996 | orchestrator | 2025-05-13 23:33:13 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:33:13.264557 | orchestrator | 2025-05-13 23:33:13 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:33:13.266814 | orchestrator | 2025-05-13 23:33:13 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:33:13.266856 | orchestrator | 2025-05-13 23:33:13 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:33:16.362455 | orchestrator | 2025-05-13 23:33:16 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:33:16.366323 | orchestrator | 2025-05-13 23:33:16 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:33:16.371043 | orchestrator | 2025-05-13 23:33:16 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:33:16.371096 | orchestrator | 2025-05-13 23:33:16 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:33:16.372895 | orchestrator | 2025-05-13 23:33:16 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:33:16.372933 | orchestrator | 2025-05-13 23:33:16 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:33:19.443807 | orchestrator | 2025-05-13 23:33:19 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:33:19.448695 | orchestrator | 2025-05-13 23:33:19 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:33:19.450627 | orchestrator | 2025-05-13 23:33:19 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:33:19.453534 | orchestrator | 2025-05-13 23:33:19 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:33:19.456408 | orchestrator | 2025-05-13 23:33:19 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:33:19.458196 | orchestrator | 2025-05-13 23:33:19 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:33:22.526923 | orchestrator | 2025-05-13 23:33:22 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:33:22.528774 | orchestrator | 2025-05-13 23:33:22 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:33:22.531275 | orchestrator | 2025-05-13 23:33:22 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:33:22.532848 | orchestrator | 2025-05-13 23:33:22 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:33:22.535822 | orchestrator | 2025-05-13 23:33:22 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:33:22.535863 | orchestrator | 2025-05-13 23:33:22 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:33:25.591287 | orchestrator | 2025-05-13 23:33:25 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:33:25.593419 | orchestrator | 2025-05-13 23:33:25 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state STARTED
2025-05-13 23:33:25.595231 | orchestrator | 2025-05-13 23:33:25 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:33:25.596556 | orchestrator | 2025-05-13 23:33:25 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:33:25.598560 | orchestrator | 2025-05-13 23:33:25 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:33:25.599486 | orchestrator | 2025-05-13 23:33:25 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:33:28.666211 | orchestrator |
2025-05-13 23:33:28.666314 | orchestrator |
2025-05-13 23:33:28.666325 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-13 23:33:28.666333 | orchestrator |
2025-05-13 23:33:28.666341 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-13 23:33:28.666350 | orchestrator | Tuesday 13 May 2025 23:32:18 +0000 (0:00:01.077) 0:00:01.077 ***********
2025-05-13 23:33:28.666357 | orchestrator | ok: [testbed-manager] => {
2025-05-13 23:33:28.666367 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-13 23:33:28.666376 | orchestrator | }
2025-05-13 23:33:28.666384 | orchestrator |
2025-05-13 23:33:28.666391 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-13 23:33:28.666398 | orchestrator | Tuesday 13 May 2025 23:32:19 +0000 (0:00:00.671) 0:00:01.749 ***********
2025-05-13 23:33:28.666405 | orchestrator | ok: [testbed-manager]
2025-05-13 23:33:28.666412 | orchestrator |
2025-05-13 23:33:28.666419 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-13 23:33:28.666425 | orchestrator | Tuesday 13 May 2025 23:32:20 +0000 (0:00:01.569) 0:00:03.318 ***********
2025-05-13 23:33:28.666432 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-13 23:33:28.666440 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-13 23:33:28.666447 | orchestrator |
2025-05-13 23:33:28.666454 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-13 23:33:28.666461 | orchestrator | Tuesday 13 May 2025 23:32:21 +0000 (0:00:01.066) 0:00:04.384 ***********
2025-05-13 23:33:28.666468 | orchestrator | changed: [testbed-manager]
2025-05-13 23:33:28.666474 | orchestrator |
2025-05-13 23:33:28.666481 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-13 23:33:28.666488 | orchestrator | Tuesday 13 May 2025 23:32:25 +0000 (0:00:03.062) 0:00:07.447 ***********
2025-05-13 23:33:28.666495 | orchestrator | changed: [testbed-manager]
2025-05-13 23:33:28.666502 | orchestrator |
2025-05-13 23:33:28.666509 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-13 23:33:28.666516 | orchestrator | Tuesday 13 May 2025 23:32:26 +0000 (0:00:01.198) 0:00:08.646 ***********
2025-05-13 23:33:28.666524 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-13 23:33:28.666531 | orchestrator | ok: [testbed-manager]
2025-05-13 23:33:28.666538 | orchestrator |
2025-05-13 23:33:28.666545 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-05-13 23:33:28.666552 | orchestrator | Tuesday 13 May 2025 23:32:51 +0000 (0:00:25.189) 0:00:33.835 ***********
2025-05-13 23:33:28.666560 | orchestrator | changed: [testbed-manager]
2025-05-13 23:33:28.666566 | orchestrator |
2025-05-13 23:33:28.666574 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:33:28.666582 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:33:28.666591 | orchestrator |
2025-05-13 23:33:28.666667 | orchestrator |
2025-05-13 23:33:28.666675 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:33:28.666682 | orchestrator | Tuesday 13 May 2025 23:32:53 +0000 (0:00:02.076) 0:00:35.912 ***********
2025-05-13 23:33:28.666720 | orchestrator | ===============================================================================
2025-05-13 23:33:28.666728 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.19s
2025-05-13 23:33:28.666736 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.06s
2025-05-13 23:33:28.666743 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.08s
2025-05-13 23:33:28.666751 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.57s
2025-05-13 23:33:28.666758 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.20s
2025-05-13 23:33:28.666775 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.07s
2025-05-13 23:33:28.666783 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.67s
2025-05-13 23:33:28.666790 | orchestrator |
2025-05-13 23:33:28.666797 | orchestrator |
2025-05-13 23:33:28.666805 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:33:28.666813 | orchestrator |
2025-05-13 23:33:28.666820 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 23:33:28.666827 | orchestrator | Tuesday 13 May 2025 23:32:18 +0000 (0:00:01.048) 0:00:01.048 ***********
2025-05-13 23:33:28.666834 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-05-13 23:33:28.666852 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-05-13 23:33:28.666859 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-05-13 23:33:28.666867 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-05-13 23:33:28.666874 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-05-13 23:33:28.666880 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-05-13 23:33:28.666887 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-05-13 23:33:28.666894 | orchestrator |
2025-05-13 23:33:28.666900 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-05-13 23:33:28.666906 | orchestrator |
2025-05-13 23:33:28.666912 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-05-13 23:33:28.666919 | orchestrator | Tuesday 13 May 2025 23:32:21 +0000 (0:00:02.660) 0:00:03.709 ***********
2025-05-13 23:33:28.666932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:33:28.666941 | orchestrator |
2025-05-13 23:33:28.666966 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-05-13 23:33:28.666972 | orchestrator | Tuesday 13 May 2025 23:32:23 +0000 (0:00:02.028) 0:00:05.737 ***********
2025-05-13 23:33:28.666979 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:33:28.666986 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:33:28.666992 | orchestrator | ok: [testbed-manager]
2025-05-13 23:33:28.666999 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:33:28.667006 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:33:28.667012 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:33:28.667019 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:33:28.667025 | orchestrator |
2025-05-13 23:33:28.667032 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-05-13 23:33:28.667038 | orchestrator | Tuesday 13 May 2025 23:32:26 +0000 (0:00:03.250) 0:00:08.988 ***********
2025-05-13 23:33:28.667045 | orchestrator | ok: [testbed-manager]
2025-05-13 23:33:28.667052 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:33:28.667058 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:33:28.667065 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:33:28.667071 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:33:28.667076 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:33:28.667082 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:33:28.667088 | orchestrator |
2025-05-13 23:33:28.667101 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-05-13 23:33:28.667107 | orchestrator | Tuesday 13 May 2025 23:32:31 +0000 (0:00:04.737) 0:00:13.725 ***********
2025-05-13 23:33:28.667112 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:33:28.667119 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:33:28.667125 | orchestrator | changed: [testbed-manager]
2025-05-13 23:33:28.667132 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:33:28.667138 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:33:28.667144 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:33:28.667150 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:33:28.667156 | orchestrator |
2025-05-13 23:33:28.667163 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-05-13 23:33:28.667171 | orchestrator | Tuesday 13 May 2025 23:32:33 +0000 (0:00:02.180) 0:00:15.905 ***********
2025-05-13 23:33:28.667177 | orchestrator | changed: [testbed-manager]
2025-05-13 23:33:28.667184 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:33:28.667191 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:33:28.667197 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:33:28.667203 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:33:28.667209 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:33:28.667215 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:33:28.667220 | orchestrator |
2025-05-13 23:33:28.667226 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-05-13 23:33:28.667232 | orchestrator | Tuesday 13 May 2025 23:32:43 +0000 (0:00:09.672) 0:00:25.578 ***********
2025-05-13 23:33:28.667238 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:33:28.667244 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:33:28.667250 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:33:28.667255 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:33:28.667261 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:33:28.667266 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:33:28.667273 | orchestrator | changed: [testbed-manager]
2025-05-13 23:33:28.667278 | orchestrator |
2025-05-13 23:33:28.667284 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-05-13 23:33:28.667290 | orchestrator | Tuesday 13 May 2025 23:33:02 +0000 (0:00:19.543) 0:00:45.121 ***********
2025-05-13 23:33:28.667297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:33:28.667305 | orchestrator |
2025-05-13 23:33:28.667311 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-05-13 23:33:28.667317 | orchestrator | Tuesday 13 May 2025 23:33:04 +0000 (0:00:01.666) 0:00:46.788 ***********
2025-05-13 23:33:28.667323 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-05-13 23:33:28.667330 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-05-13 23:33:28.667336 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-05-13 23:33:28.667342 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-05-13 23:33:28.667348 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-05-13 23:33:28.667355 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-05-13 23:33:28.667362 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-05-13 23:33:28.667368 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-05-13 23:33:28.667374 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-05-13 23:33:28.667380 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-05-13 23:33:28.667392 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-05-13 23:33:28.667399 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-05-13 23:33:28.667405 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-05-13 23:33:28.667412 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-05-13 23:33:28.667425 | orchestrator |
2025-05-13 23:33:28.667432 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-05-13 23:33:28.667438 | orchestrator | Tuesday 13 May 2025 23:33:10 +0000 (0:00:06.356) 0:00:53.144 ***********
2025-05-13 23:33:28.667444 | orchestrator | ok: [testbed-manager]
2025-05-13 23:33:28.667451 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:33:28.667457 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:33:28.667463 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:33:28.667469 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:33:28.667475 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:33:28.667481 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:33:28.667487 | orchestrator |
2025-05-13 23:33:28.667493 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-05-13 23:33:28.667499 | orchestrator | Tuesday 13 May 2025 23:33:12 +0000 (0:00:01.256) 0:00:54.401 ***********
2025-05-13 23:33:28.667505 | orchestrator | changed: [testbed-manager]
2025-05-13 23:33:28.667511 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:33:28.667525 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:33:28.667533 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:33:28.667539 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:33:28.667545 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:33:28.667552 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:33:28.667559 | orchestrator |
2025-05-13 23:33:28.667565 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-05-13 23:33:28.667571 | orchestrator | Tuesday 13 May 2025 23:33:13 +0000 (0:00:01.406) 0:00:55.807 ***********
2025-05-13 23:33:28.667578 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:33:28.667585 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:33:28.667592 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:33:28.667649 | orchestrator | ok: [testbed-manager]
2025-05-13 23:33:28.667656 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:33:28.667663 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:33:28.667669 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:33:28.667676 | orchestrator |
2025-05-13 23:33:28.667682 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-05-13 23:33:28.667689 | orchestrator | Tuesday 13 May 2025 23:33:15 +0000 (0:00:02.270) 0:00:58.078 ***********
2025-05-13 23:33:28.667696 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:33:28.667703 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:33:28.667709 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:33:28.667716 | orchestrator | ok: [testbed-manager]
2025-05-13 23:33:28.667722 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:33:28.667728 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:33:28.667735 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:33:28.667742 | orchestrator |
2025-05-13 23:33:28.667748 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-05-13 23:33:28.667755 | orchestrator | Tuesday 13 May 2025 23:33:18 +0000 (0:00:02.759) 0:01:00.838 ***********
2025-05-13 23:33:28.667761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-05-13 23:33:28.667771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:33:28.667778 | orchestrator |
2025-05-13 23:33:28.667785 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-05-13 23:33:28.667793 | orchestrator | Tuesday 13 May 2025 23:33:20 +0000 (0:00:01.683) 0:01:02.522 ***********
2025-05-13 23:33:28.667800 | orchestrator | changed: [testbed-manager]
2025-05-13 23:33:28.667807 | orchestrator |
2025-05-13 23:33:28.667813 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-13 23:33:28.667813 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-13 23:33:28.667820 | orchestrator | Tuesday 13 May 2025 23:33:22 +0000 (0:00:02.747) 0:01:05.269 ***********
2025-05-13 23:33:28.667825 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:33:28.667831 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:33:28.667846 | orchestrator | changed: [testbed-manager]
2025-05-13 23:33:28.667852 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:33:28.667858 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:33:28.667864 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:33:28.667869 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:33:28.667875 | orchestrator |
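Note the ordering: both configuration files notified the handler per host, but Ansible deduplicates notifications and runs each handler once per host at the end of the play, which is why the restart appears only here. The handler itself is presumably no more than:

```yaml
# handlers/main.yml -- illustrative; the role's real handler may differ.
- name: Restart service netdata
  become: true
  ansible.builtin.service:
    name: netdata
    state: restarted
```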
2025-05-13 23:33:28.667882 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:33:28.667888 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:33:28.667895 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:33:28.667902 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:33:28.667907 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:33:28.667913 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:33:28.667919 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:33:28.667931 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:33:28.667938 | orchestrator |
2025-05-13 23:33:28.667944 | orchestrator |
2025-05-13 23:33:28.667950 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:33:28.667956 | orchestrator | Tuesday 13 May 2025 23:33:26 +0000 (0:00:03.756) 0:01:09.025 ***********
2025-05-13 23:33:28.667962 | orchestrator | ===============================================================================
2025-05-13 23:33:28.667969 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.54s
2025-05-13 23:33:28.667975 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.67s
2025-05-13 23:33:28.667981 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.36s
2025-05-13 23:33:28.667987 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.74s
2025-05-13 23:33:28.667993 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.76s
2025-05-13 23:33:28.667999 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.25s
2025-05-13 23:33:28.668006 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.76s
2025-05-13 23:33:28.668022 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.75s
2025-05-13 23:33:28.668029 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.66s
2025-05-13 23:33:28.668036 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.27s
2025-05-13 23:33:28.668043 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.18s
2025-05-13 23:33:28.668049 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.03s
2025-05-13 23:33:28.668056 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.68s
2025-05-13 23:33:28.668062 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.67s
2025-05-13 23:33:28.668068 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.41s
2025-05-13 23:33:28.668075 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.26s
2025-05-13 23:33:28.668082 | orchestrator | 2025-05-13 23:33:28 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:33:28.668089 | orchestrator | 2025-05-13 23:33:28 | INFO  | Task ebfcaffc-5d08-4b09-8f78-b3b1ab507230 is in state SUCCESS
2025-05-13 23:33:28.668102 | orchestrator | 2025-05-13 23:33:28 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:33:28.668109 | orchestrator | 2025-05-13 23:33:28 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:33:28.668115 | orchestrator | 2025-05-13 23:33:28 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state STARTED
2025-05-13 23:33:28.668122 | orchestrator | 2025-05-13 23:33:28 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:34:11.453569 | orchestrator | 2025-05-13 23:34:11 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:34:11.462690 | orchestrator | 2025-05-13 23:34:11 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:34:11.467775 | orchestrator | 2025-05-13 23:34:11 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state STARTED
2025-05-13 23:34:11.468671 | orchestrator | 2025-05-13 23:34:11 | INFO  | Task 6574ade5-3f58-402c-8acd-a95c8cafe789 is in state SUCCESS
2025-05-13 23:34:11.468836 | orchestrator | 2025-05-13 23:34:11 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:34:54.261077 | orchestrator | 2025-05-13 23:34:54 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:34:54.262563 | orchestrator | 2025-05-13 23:34:54 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:34:54.268604 | orchestrator |
2025-05-13 23:34:54.268689 | orchestrator |
2025-05-13 23:34:54.268701 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-13 23:34:54.268719 | orchestrator |
2025-05-13 23:34:54.268727 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-13 23:34:54.268736 | orchestrator | Tuesday 13 May 2025 23:32:29 +0000 (0:00:00.535) 0:00:00.535 ***********
2025-05-13 23:34:54.268744 | orchestrator | ok: [testbed-manager]
2025-05-13 23:34:54.268753 | orchestrator |
2025-05-13 23:34:54.268761 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-13 23:34:54.268769 | orchestrator | Tuesday 13 May 2025 23:32:31 +0000 (0:00:01.966) 0:00:02.502 ***********
2025-05-13 23:34:54.268778 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-13 23:34:54.268786 | orchestrator |
2025-05-13 23:34:54.268794 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-13 23:34:54.268802 | orchestrator | Tuesday 13 May 2025 23:32:32 +0000 (0:00:01.053) 0:00:03.555 ***********
2025-05-13 23:34:54.268810 | orchestrator | changed: [testbed-manager]
2025-05-13 23:34:54.268818 | orchestrator |
2025-05-13 23:34:54.268825 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-13 23:34:54.268833 | orchestrator | Tuesday 13 May 2025 23:32:34 +0000 (0:00:01.485) 0:00:05.040 ***********
2025-05-13 23:34:54.268841 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-05-13 23:34:54.268850 | orchestrator | ok: [testbed-manager]
2025-05-13 23:34:54.268857 | orchestrator |
2025-05-13 23:34:54.268865 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-13 23:34:54.268873 | orchestrator | Tuesday 13 May 2025 23:34:04 +0000 (0:01:29.984) 0:01:35.025 ***********
2025-05-13 23:34:54.268881 | orchestrator | changed: [testbed-manager]
2025-05-13 23:34:54.268888 | orchestrator |
2025-05-13 23:34:54.268896 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:34:54.268929 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:34:54.268939 | orchestrator |
2025-05-13 23:34:54.268946 | orchestrator |
2025-05-13 23:34:54.269121 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:34:54.269131 | orchestrator | Tuesday 13 May 2025 23:34:07 +0000 (0:00:03.660) 0:01:38.685 ***********
2025-05-13 23:34:54.269138 | orchestrator | ===============================================================================
2025-05-13 23:34:54.269146 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 89.98s
2025-05-13 23:34:54.269154 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.66s
2025-05-13 23:34:54.269161 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.97s
2025-05-13 23:34:54.269169 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.48s
2025-05-13 23:34:54.269177 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.06s
2025-05-13 23:34:54.269184 | orchestrator |
2025-05-13 23:34:54.269225 | orchestrator | 2025-05-13 23:34:54 | INFO  | Task 84fd3660-b731-46fd-82a3-727bf130a991 is in state SUCCESS
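The 89.98s spent in Manage phpmyadmin service is the retry loop visible above: the first check failed ("10 retries left", typically while the container is still coming up) and a later attempt returned ok. The deployed artifact is the docker-compose.yml copied to /opt/phpmyadmin; a minimal sketch assuming traefik fronts the service — image tag and environment are assumptions, since the real file is templated by the role and never printed in the log:

```yaml
# /opt/phpmyadmin/docker-compose.yml -- illustrative sketch only.
networks:
  traefik:
    external: true           # matches "Create traefik external network"
services:
  phpmyadmin:
    image: phpmyadmin:latest    # assumed tag
    restart: unless-stopped
    environment:
      PMA_ARBITRARY: "1"        # assumed; allows connecting to any DB host
    networks:
      - traefik
```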
2025-05-13 23:34:54.271004 | orchestrator |
2025-05-13 23:34:54.271049 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-13 23:34:54.271058 | orchestrator |
2025-05-13 23:34:54.271066 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-13 23:34:54.271074 | orchestrator | Tuesday 13 May 2025 23:32:08 +0000 (0:00:00.262) 0:00:00.262 ***********
2025-05-13 23:34:54.271082 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:34:54.271091 | orchestrator |
2025-05-13 23:34:54.271099 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-13 23:34:54.271108 | orchestrator | Tuesday 13 May 2025 23:32:10 +0000 (0:00:01.396) 0:00:01.659 ***********
2025-05-13 23:34:54.271115 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 23:34:54.271123 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 23:34:54.271131 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 23:34:54.271138 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 23:34:54.271226 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 23:34:54.271242 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 23:34:54.271255 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 23:34:54.271268 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 23:34:54.271281 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 23:34:54.271295 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 23:34:54.271308 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 23:34:54.271316 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 23:34:54.271324 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 23:34:54.271339 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 23:34:54.271347 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 23:34:54.271355 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 23:34:54.271377 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 23:34:54.271385 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 23:34:54.271392 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 23:34:54.271400 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 23:34:54.271407 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
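Each loop item above is a [dict, name] pair, i.e. the task zips the service definitions with their names. One way to reproduce items of that shape — the variable names and destination path are illustrative, not kolla-ansible's actual code:

```yaml
# Illustrative sketch: with_together zips two lists, yielding items like
# [{'service_name': 'cron'}, 'cron'] as printed in the log above.
- name: Ensuring config directories exist
  become: true
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.1 }}"   # assumed destination
    state: directory
    mode: "0770"
  with_together:
    - "{{ services }}"
    - "{{ services | map(attribute='service_name') | list }}"
  vars:
    services:
      - service_name: cron
      - service_name: fluentd
      - service_name: kolla-toolbox
```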
2025-05-13 23:34:54.271415 | orchestrator |
2025-05-13 23:34:54.271422 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-13 23:34:54.271430 | orchestrator | Tuesday 13 May 2025 23:32:15 +0000 (0:00:05.203) 0:00:06.862 ***********
2025-05-13 23:34:54.271438 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:34:54.271448 | orchestrator |
2025-05-13 23:34:54.271455 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-05-13 23:34:54.271463 | orchestrator | Tuesday 13 May 2025 23:32:17 +0000 (0:00:04.874) 0:00:08.588 ***********
2025-05-13 23:34:54.271475 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-13 23:34:54.271488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271565 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:34:54.271574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271670 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:34:54.271692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271752 | orchestrator |
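The {'key': ..., 'value': ...} items show that this task iterates the play's service map through the dict2items filter, copying the extra CA bundle into every enabled service's config directory; the two backend-TLS tasks that follow run the same loop but skip on every host, consistent with backend TLS being disabled in this testbed. A sketch of the pattern, with assumed paths and variable names:

```yaml
# Illustrative sketch of the loop pattern; not kolla-ansible's actual task.
- name: common | Copying over extra CA certificates
  become: true
  ansible.builtin.copy:
    src: ca-certificates/                              # assumed source dir
    dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
    mode: "0644"
  loop: "{{ common_services | dict2items }}"           # yields key/value items
  when: item.value.enabled | bool
```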
2025-05-13 23:34:54.271760 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-05-13 23:34:54.271768 | orchestrator | Tuesday 13 May 2025 23:32:22 +0000 (0:00:04.874) 0:00:13.463 ***********
2025-05-13 23:34:54.271777 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271786 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271795 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271857 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:34:54.271887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271918 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:34:54.271926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271951 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:34:54.271964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.271973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.271987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.271995 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:34:54.272003 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:34:54.272015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.272024 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.272032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.272040 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:34:54.272048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.272061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.272069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.272083 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:34:54.272091 | orchestrator |
2025-05-13 23:34:54.272099 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-05-13 23:34:54.272107 | orchestrator | Tuesday 13 May 2025 23:32:23 +0000 (0:00:01.301) 0:00:14.764 ***********
2025-05-13 23:34:54.272115 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.272131 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.272139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.272147 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:34:54.272155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.272164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.272172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.272180 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:34:54.272199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.272208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.272216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.272224 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:34:54.272236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.272245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.272253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.272261 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:34:54.272269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.272560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.272581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.272589 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:34:54.272598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.272606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.272619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.272628 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:34:54.272655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', ...})
2025-05-13 23:34:54.272664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', ...})
2025-05-13 23:34:54.272673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', ...})
2025-05-13 23:34:54.272690 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:34:54.272703 | orchestrator |
2025-05-13 23:34:54.272714 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-05-13 23:34:54.272727 | orchestrator | Tuesday 13 May 2025 23:32:26 +0000 (0:00:02.672) 0:00:17.437 ***********
2025-05-13 23:34:54.272741 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:34:54.272753 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:34:54.272767 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:34:54.272781 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:34:54.272793 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:34:54.272814 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:34:54.272822 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:34:54.272830 | orchestrator |
2025-05-13 23:34:54.272838 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-05-13 23:34:54.272846 | orchestrator | Tuesday 13 May 2025 23:32:26 +0000 (0:00:00.687) 0:00:18.124 ***********
2025-05-13 23:34:54.272854 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:34:54.272862 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:34:54.272869 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:34:54.272877 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:34:54.272884 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:34:54.272892 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:34:54.272900 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:34:54.272907 | orchestrator |
2025-05-13 23:34:54.272915 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-05-13 23:34:54.272923 | orchestrator | Tuesday 13 May 2025 23:32:27 +0000 (0:00:00.888) 0:00:19.012 ***********
2025-05-13 23:34:54.272931 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value':
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.272940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.272954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.272962 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.272977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.272985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.272998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.273007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.273015 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.273058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273066 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273126 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:34:54.273135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:34:54.273143 | orchestrator |
2025-05-13 23:34:54.273151 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-05-13 23:34:54.273164 | orchestrator | Tuesday 13 May 2025 23:32:33 +0000 (0:00:06.254) 0:00:25.266 ***********
2025-05-13 23:34:54.273172 | orchestrator | [WARNING]: Skipped
2025-05-13 23:34:54.273181 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-05-13 23:34:54.273189 | orchestrator | to this access issue:
2025-05-13 23:34:54.273197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-05-13 23:34:54.273204 | orchestrator | directory
2025-05-13 23:34:54.273212 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-13 23:34:54.273220 | orchestrator |
2025-05-13 23:34:54.273228 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-05-13 23:34:54.273236 | orchestrator | Tuesday 13 May 2025 23:32:35 +0000 (0:00:01.691) 0:00:26.958 ***********
2025-05-13 23:34:54.273244 | orchestrator | [WARNING]: Skipped
2025-05-13 23:34:54.273251 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-05-13 23:34:54.273262 | orchestrator | to this access issue:
2025-05-13 23:34:54.273271 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-05-13 23:34:54.273278 | orchestrator | directory
2025-05-13 23:34:54.273286 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-13 23:34:54.273293 | orchestrator |
2025-05-13 23:34:54.273301 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-05-13 23:34:54.273309 | orchestrator | Tuesday 13 May 2025 23:32:36 +0000 (0:00:01.207) 0:00:28.165 ***********
2025-05-13 23:34:54.273316 | orchestrator | [WARNING]: Skipped
2025-05-13 23:34:54.273324 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-05-13 23:34:54.273332 | orchestrator | to this access issue:
2025-05-13 23:34:54.273340 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-05-13 23:34:54.273347 | orchestrator | directory
2025-05-13 23:34:54.273355 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-13 23:34:54.273362 | orchestrator |
2025-05-13 23:34:54.273370 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-05-13 23:34:54.273378 | orchestrator | Tuesday 13 May 2025 23:32:37 +0000 (0:00:00.771) 0:00:28.937 ***********
2025-05-13 23:34:54.273385 | orchestrator | [WARNING]: Skipped
2025-05-13 23:34:54.273393 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-05-13 23:34:54.273401 | orchestrator | to this access issue:
2025-05-13 23:34:54.273409 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-05-13 23:34:54.273416 | orchestrator | directory
2025-05-13 23:34:54.273436 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-13 23:34:54.273452 | orchestrator |
2025-05-13 23:34:54.273471 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-05-13 23:34:54.273484 | orchestrator | Tuesday 13 May 2025 23:32:38 +0000 (0:00:00.844) 0:00:29.781 ***********
2025-05-13 23:34:54.273497 | orchestrator | changed: [testbed-manager]
2025-05-13 23:34:54.273510 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:34:54.273524 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:34:54.273536 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:34:54.273549 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:34:54.273561 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:34:54.273573 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:34:54.273586 | orchestrator |
2025-05-13 23:34:54.273599 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-05-13 23:34:54.273612 | orchestrator | Tuesday 13 May 2025 23:32:42 +0000 (0:00:04.281) 0:00:34.063 ***********
2025-05-13 23:34:54.273625 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-13 23:34:54.273685 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-13 23:34:54.273700 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-13 23:34:54.273713 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-13 23:34:54.273726 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-13 23:34:54.273738 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-13 23:34:54.273753 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-13 23:34:54.273762 | orchestrator |
2025-05-13 23:34:54.273770 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-05-13 23:34:54.273778 | orchestrator | Tuesday 13 May 2025 23:32:45 +0000 (0:00:02.999) 0:00:37.062 ***********
2025-05-13 23:34:54.273785 | orchestrator | changed: [testbed-manager]
2025-05-13 23:34:54.273793 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:34:54.273801 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:34:54.273809 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:34:54.273817 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:34:54.273825 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:34:54.273832 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:34:54.273840 | orchestrator |
2025-05-13 23:34:54.273848 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-05-13 23:34:54.273856 | orchestrator | Tuesday 13 May 2025 23:32:49 +0000 (0:00:03.532) 0:00:40.594
*********** 2025-05-13 23:34:54.273864 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.273881 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:34:54.273897 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.273906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:34:54.273914 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.273930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:34:54.273939 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273948 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273956 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273964 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.273982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:34:54.273990 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.273999 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.274011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:34:54.274074 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.274083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:34:54.274091 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274112 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.274121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 
23:34:54.274129 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274137 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274146 | orchestrator | 2025-05-13 23:34:54.274154 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-13 23:34:54.274162 | orchestrator | Tuesday 13 May 2025 23:32:52 +0000 (0:00:03.584) 0:00:44.179 *********** 2025-05-13 23:34:54.274173 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 23:34:54.274181 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 23:34:54.274189 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 23:34:54.274197 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 23:34:54.274205 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 23:34:54.274213 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 23:34:54.274221 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 23:34:54.274228 | orchestrator | 2025-05-13 23:34:54.274236 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-13 23:34:54.274244 | orchestrator | Tuesday 13 May 2025 23:32:55 +0000 (0:00:03.131) 0:00:47.311 *********** 2025-05-13 23:34:54.274251 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 23:34:54.274259 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 23:34:54.274267 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 23:34:54.274275 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 23:34:54.274282 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 23:34:54.274295 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 23:34:54.274303 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 23:34:54.274311 | orchestrator | 2025-05-13 23:34:54.274318 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-13 23:34:54.274326 | orchestrator | Tuesday 13 May 2025 23:32:58 +0000 (0:00:02.291) 0:00:49.602 
*********** 2025-05-13 23:34:54.274334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.274348 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.274357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.274365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.274377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.274385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.274420 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 23:34:54.274449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-13 23:34:54.274458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274508 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274516 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:34:54.274525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})
2025-05-13 23:34:54.274534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:34:54.274546 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:34:54.274554 | orchestrator |
2025-05-13 23:34:54.274562 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-05-13 23:34:54.274570 | orchestrator | Tuesday 13 May 2025 23:33:02 +0000 (0:00:03.857) 0:00:53.459 ***********
2025-05-13 23:34:54.274578 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:34:54.274591 | orchestrator | changed: [testbed-manager]
2025-05-13 23:34:54.274598 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:34:54.274606 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:34:54.274614 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:34:54.274622 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:34:54.274651 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:34:54.274668 | orchestrator |
2025-05-13 23:34:54.274676 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-05-13 23:34:54.274685 | orchestrator | Tuesday 13 May 2025 23:33:04 +0000 (0:00:02.100) 0:00:55.560 ***********
2025-05-13 23:34:54.274692 | orchestrator | changed: [testbed-manager]
2025-05-13 23:34:54.274700 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:34:54.274708 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:34:54.274716 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:34:54.274723 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:34:54.274731 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:34:54.274739 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:34:54.274746 | orchestrator |
2025-05-13 23:34:54.274754 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-13 23:34:54.274762 | orchestrator | Tuesday 13 May 2025 23:33:05 +0000 (0:00:01.494) 0:00:57.055 ***********
2025-05-13 23:34:54.274770 | orchestrator |
2025-05-13 23:34:54.274778 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-13 23:34:54.274786 | orchestrator | Tuesday 13 May 2025 23:33:05 +0000 (0:00:00.092) 0:00:57.149 ***********
2025-05-13 23:34:54.274794 | orchestrator |
2025-05-13 23:34:54.274802 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-13 23:34:54.274809 | orchestrator | Tuesday 13 May 2025 23:33:05 +0000 (0:00:00.256) 0:00:57.241 ***********
2025-05-13 23:34:54.274817 | orchestrator |
2025-05-13 23:34:54.274825 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-13 23:34:54.274833 | orchestrator | Tuesday 13 May 2025 23:33:06 +0000 (0:00:00.256) 0:00:57.497 ***********
2025-05-13 23:34:54.274840 | orchestrator |
2025-05-13 23:34:54.274848 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-13 23:34:54.274856 | orchestrator | Tuesday 13 May 2025 23:33:06 +0000 (0:00:00.066) 0:00:57.564 ***********
2025-05-13 23:34:54.274863 | orchestrator |
2025-05-13 23:34:54.274871 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-13 23:34:54.274879 | orchestrator | Tuesday 13 May 2025 23:33:06 +0000 (0:00:00.074) 0:00:57.638 ***********
2025-05-13 23:34:54.274887 | orchestrator |
2025-05-13 23:34:54.274894 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-13 23:34:54.274902 | orchestrator | Tuesday 13 May 2025 23:33:06 +0000 (0:00:00.083) 0:00:57.722 ***********
2025-05-13 23:34:54.274910 | orchestrator |
2025-05-13 23:34:54.274917 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-05-13 23:34:54.274930 | orchestrator | Tuesday 13 May 2025 23:33:06 +0000 (0:00:00.116) 0:00:57.839 ***********
2025-05-13 23:34:54.274938 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:34:54.274946 | orchestrator | changed: [testbed-manager]
2025-05-13 23:34:54.274954 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:34:54.274963 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:34:54.274970 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:34:54.274978 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:34:54.274986 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:34:54.274994 | orchestrator |
2025-05-13 23:34:54.275001 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-05-13 23:34:54.275009 | orchestrator | Tuesday 13 May 2025 23:33:53 +0000 (0:00:47.456) 0:01:45.295 ***********
2025-05-13 23:34:54.275017 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:34:54.275025 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:34:54.275033 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:34:54.275040 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:34:54.275053 | orchestrator | changed: [testbed-manager]
2025-05-13 23:34:54.275061 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:34:54.275069 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:34:54.275077 | orchestrator |
2025-05-13 23:34:54.275085 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-05-13 23:34:54.275093 | orchestrator | Tuesday 13 May 2025 23:34:41 +0000 (0:00:47.682) 0:02:32.978 ***********
2025-05-13 23:34:54.275101 | orchestrator | ok: [testbed-manager]
2025-05-13 23:34:54.275109 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:34:54.275117 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:34:54.275125 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:34:54.275132 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:34:54.275140 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:34:54.275148 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:34:54.275156 | orchestrator |
2025-05-13 23:34:54.275164 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-05-13 23:34:54.275171 | orchestrator | Tuesday 13 May 2025 23:34:43 +0000 (0:00:02.006) 0:02:34.985 ***********
2025-05-13 23:34:54.275179 | orchestrator | changed: [testbed-manager]
[testbed-manager] 2025-05-13 23:34:54.275187 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:34:54.275195 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:34:54.275203 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:34:54.275211 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:34:54.275219 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:34:54.275227 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:34:54.275234 | orchestrator | 2025-05-13 23:34:54.275243 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:34:54.275252 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 23:34:54.275260 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 23:34:54.275269 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 23:34:54.275277 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 23:34:54.275285 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 23:34:54.275293 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 23:34:54.275301 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 23:34:54.275309 | orchestrator | 2025-05-13 23:34:54.275316 | orchestrator | 2025-05-13 23:34:54.275324 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:34:54.275332 | orchestrator | Tuesday 13 May 2025 23:34:53 +0000 (0:00:09.547) 0:02:44.532 *********** 2025-05-13 23:34:54.275340 | orchestrator | =============================================================================== 2025-05-13 23:34:54.275348 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 47.68s 2025-05-13 23:34:54.275356 | orchestrator | common : Restart fluentd container ------------------------------------- 47.46s 2025-05-13 23:34:54.275363 | orchestrator | common : Restart cron container ----------------------------------------- 9.55s 2025-05-13 23:34:54.275371 | orchestrator | common : Copying over config.json files for services -------------------- 6.25s 2025-05-13 23:34:54.275379 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.20s 2025-05-13 23:34:54.275386 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.87s 2025-05-13 23:34:54.275399 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.28s 2025-05-13 23:34:54.275407 | orchestrator | common : Check common containers ---------------------------------------- 3.86s 2025-05-13 23:34:54.275414 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.58s 2025-05-13 23:34:54.275422 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.53s 2025-05-13 23:34:54.275430 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.13s 2025-05-13 23:34:54.275437 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.00s 2025-05-13 23:34:54.275445 | orchestrator | service-cert-copy : 
2025-05-13 23:34:54.275453 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.29s
2025-05-13 23:34:54.275465 | orchestrator | common : Creating log volume -------------------------------------------- 2.10s
2025-05-13 23:34:54.275473 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.01s
2025-05-13 23:34:54.275481 | orchestrator | common : include_tasks -------------------------------------------------- 1.73s
2025-05-13 23:34:54.275488 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.69s
2025-05-13 23:34:54.275496 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.50s
2025-05-13 23:34:54.275504 | orchestrator | common : include_tasks -------------------------------------------------- 1.40s
2025-05-13 23:34:54.275512 | orchestrator | 2025-05-13 23:34:54 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:34:57.332455 | orchestrator | 2025-05-13 23:34:57 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:34:57.332997 | orchestrator | 2025-05-13 23:34:57 | INFO  | Task facf79d2-ea2c-4cd8-91d6-9794cdaf7657 is in state STARTED
2025-05-13 23:34:57.337628 | orchestrator | 2025-05-13 23:34:57 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:34:57.337738 | orchestrator | 2025-05-13 23:34:57 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:34:57.337753 | orchestrator | 2025-05-13 23:34:57 | INFO  | Task 873605d2-2d40-4828-92ce-06d604785102 is in state STARTED
2025-05-13 23:34:57.337787 | orchestrator | 2025-05-13 23:34:57 | INFO  | Task 85140e50-2c66-490e-9b21-b44d8bf9e9be is in state STARTED
2025-05-13 23:34:57.337799 | orchestrator | 2025-05-13 23:34:57 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:35:15.686975 | orchestrator | 2025-05-13 23:35:15 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:35:15.687239 | orchestrator | 2025-05-13 23:35:15 | INFO  | Task facf79d2-ea2c-4cd8-91d6-9794cdaf7657 is in state STARTED
2025-05-13 23:35:15.688178 | orchestrator | 2025-05-13 23:35:15 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:35:15.688958 | orchestrator | 2025-05-13 23:35:15 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:35:15.691515 | orchestrator | 2025-05-13 23:35:15 | INFO  | Task 873605d2-2d40-4828-92ce-06d604785102 is in state STARTED
2025-05-13 23:35:15.691840 | orchestrator | 2025-05-13 23:35:15 | INFO  | Task 85140e50-2c66-490e-9b21-b44d8bf9e9be is in state SUCCESS
2025-05-13 23:35:15.693105 | orchestrator | 2025-05-13 23:35:15 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED
2025-05-13 23:35:15.693145 | orchestrator | 2025-05-13 23:35:15 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:35:24.871110 | orchestrator | 2025-05-13 23:35:24 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:35:24.872203 | orchestrator | 2025-05-13 23:35:24 | INFO  | Task facf79d2-ea2c-4cd8-91d6-9794cdaf7657 is in state STARTED
2025-05-13 23:35:24.875354 | orchestrator | 2025-05-13 23:35:24 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:35:24.876028 | orchestrator | 2025-05-13 23:35:24 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:35:24.877272 | orchestrator | 2025-05-13 23:35:24 | INFO  | Task 873605d2-2d40-4828-92ce-06d604785102 is in state SUCCESS
2025-05-13 23:35:24.879008 | orchestrator |
2025-05-13 23:35:24.879031 | orchestrator |
2025-05-13 23:35:24.879036 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:35:24.879041 | orchestrator |
2025-05-13 23:35:24.879045 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:35:24.879049 |
orchestrator | Tuesday 13 May 2025 23:35:01 +0000 (0:00:00.573) 0:00:00.573 *********** 2025-05-13 23:35:24.879054 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:35:24.879059 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:35:24.879063 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:35:24.879080 | orchestrator | 2025-05-13 23:35:24.879084 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:35:24.879088 | orchestrator | Tuesday 13 May 2025 23:35:01 +0000 (0:00:00.812) 0:00:01.385 *********** 2025-05-13 23:35:24.879093 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-13 23:35:24.879097 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-13 23:35:24.879101 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-13 23:35:24.879105 | orchestrator | 2025-05-13 23:35:24.879109 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-13 23:35:24.879113 | orchestrator | 2025-05-13 23:35:24.879117 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-13 23:35:24.879121 | orchestrator | Tuesday 13 May 2025 23:35:02 +0000 (0:00:00.642) 0:00:02.028 *********** 2025-05-13 23:35:24.879125 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:35:24.879130 | orchestrator | 2025-05-13 23:35:24.879134 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-13 23:35:24.879138 | orchestrator | Tuesday 13 May 2025 23:35:03 +0000 (0:00:00.724) 0:00:02.752 *********** 2025-05-13 23:35:24.879142 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-13 23:35:24.879146 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-13 23:35:24.879153 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-13 23:35:24.879157 | orchestrator | 2025-05-13 23:35:24.879161 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-13 23:35:24.879165 | orchestrator | Tuesday 13 May 2025 23:35:04 +0000 (0:00:00.887) 0:00:03.640 *********** 2025-05-13 23:35:24.879169 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-13 23:35:24.879173 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-13 23:35:24.879177 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-13 23:35:24.879180 | orchestrator | 2025-05-13 23:35:24.879184 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-13 23:35:24.879188 | orchestrator | Tuesday 13 May 2025 23:35:06 +0000 (0:00:02.840) 0:00:06.481 *********** 2025-05-13 23:35:24.879192 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:35:24.879196 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:35:24.879199 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:35:24.879203 | orchestrator | 2025-05-13 23:35:24.879207 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-13 23:35:24.879210 | orchestrator | Tuesday 13 May 2025 23:35:09 +0000 (0:00:02.739) 0:00:09.221 *********** 2025-05-13 23:35:24.879214 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:35:24.879218 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:35:24.879222 | orchestrator | changed: 
[testbed-node-2] 2025-05-13 23:35:24.879225 | orchestrator | 2025-05-13 23:35:24.879229 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:35:24.879233 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:35:24.879239 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:35:24.879242 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:35:24.879246 | orchestrator | 2025-05-13 23:35:24.879250 | orchestrator | 2025-05-13 23:35:24.879253 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:35:24.879257 | orchestrator | Tuesday 13 May 2025 23:35:13 +0000 (0:00:03.876) 0:00:13.098 *********** 2025-05-13 23:35:24.879261 | orchestrator | =============================================================================== 2025-05-13 23:35:24.879268 | orchestrator | memcached : Restart memcached container --------------------------------- 3.88s 2025-05-13 23:35:24.879272 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.84s 2025-05-13 23:35:24.879276 | orchestrator | memcached : Check memcached container ----------------------------------- 2.74s 2025-05-13 23:35:24.879280 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.89s 2025-05-13 23:35:24.879283 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.81s 2025-05-13 23:35:24.879288 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.72s 2025-05-13 23:35:24.879292 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2025-05-13 23:35:24.879295 | orchestrator | 2025-05-13 23:35:24.879299 | orchestrator | 2025-05-13 23:35:24.879303 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:35:24.879306 | orchestrator | 2025-05-13 23:35:24.879310 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:35:24.879314 | orchestrator | Tuesday 13 May 2025 23:35:01 +0000 (0:00:00.793) 0:00:00.793 *********** 2025-05-13 23:35:24.879317 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:35:24.879321 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:35:24.879325 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:35:24.879329 | orchestrator | 2025-05-13 23:35:24.879333 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:35:24.879343 | orchestrator | Tuesday 13 May 2025 23:35:02 +0000 (0:00:00.458) 0:00:01.251 *********** 2025-05-13 23:35:24.879347 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-13 23:35:24.879351 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-13 23:35:24.879355 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-13 23:35:24.879359 | orchestrator | 2025-05-13 23:35:24.879362 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-13 23:35:24.879366 | orchestrator | 2025-05-13 23:35:24.879370 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-13 23:35:24.879374 | orchestrator | Tuesday 13 
May 2025 23:35:02 +0000 (0:00:00.613) 0:00:01.864 *********** 2025-05-13 23:35:24.879377 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:35:24.879381 | orchestrator | 2025-05-13 23:35:24.879385 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-13 23:35:24.879389 | orchestrator | Tuesday 13 May 2025 23:35:03 +0000 (0:00:00.900) 0:00:02.765 *********** 2025-05-13 23:35:24.879395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879431 | orchestrator | 2025-05-13 23:35:24.879435 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-13 23:35:24.879439 | orchestrator | Tuesday 13 May 2025 23:35:05 +0000 (0:00:01.498) 0:00:04.264 *********** 2025-05-13 23:35:24.879443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879474 | orchestrator | 2025-05-13 23:35:24.879478 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-13 23:35:24.879482 | orchestrator | Tuesday 13 May 2025 23:35:08 +0000 (0:00:03.154) 0:00:07.419 *********** 2025-05-13 23:35:24.879486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': 
'30'}}}) 2025-05-13 23:35:24.879502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879514 | orchestrator | 2025-05-13 23:35:24.879520 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-13 23:35:24.879524 | orchestrator | Tuesday 13 May 2025 23:35:11 +0000 (0:00:03.330) 0:00:10.749 *********** 2025-05-13 23:35:24.879528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 23:35:24.879556 | orchestrator | 2025-05-13 23:35:24.879560 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-13 23:35:24.879564 | orchestrator | Tuesday 13 May 2025 23:35:13 +0000 (0:00:01.786) 0:00:12.537 *********** 2025-05-13 23:35:24.879568 | orchestrator | 2025-05-13 23:35:24.879572 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-13 23:35:24.879577 | orchestrator | Tuesday 13 May 2025 23:35:13 +0000 (0:00:00.081) 0:00:12.619 *********** 2025-05-13 23:35:24.879581 | orchestrator | 2025-05-13 23:35:24.879585 | orchestrator | TASK [redis : Flush handlers] 
**************************************************
2025-05-13 23:35:24.879588 | orchestrator | Tuesday 13 May 2025 23:35:13 +0000 (0:00:00.071) 0:00:12.690 ***********
2025-05-13 23:35:24.879592 | orchestrator |
2025-05-13 23:35:24.879596 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-05-13 23:35:24.879600 | orchestrator | Tuesday 13 May 2025 23:35:13 +0000 (0:00:00.071) 0:00:12.762 ***********
2025-05-13 23:35:24.879603 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:35:24.879607 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:35:24.879611 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:35:24.879615 | orchestrator |
2025-05-13 23:35:24.879619 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-13 23:35:24.879623 | orchestrator | Tuesday 13 May 2025 23:35:18 +0000 (0:00:05.023) 0:00:17.785 ***********
2025-05-13 23:35:24.879627 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:35:24.879634 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:35:24.879638 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:35:24.879692 | orchestrator |
2025-05-13 23:35:24.879698 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:35:24.879704 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:35:24.879710 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:35:24.879716 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:35:24.879722 | orchestrator |
2025-05-13 23:35:24.879728 | orchestrator |
2025-05-13 23:35:24.879739 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:35:24.879745 | orchestrator | Tuesday 13 May 2025 23:35:22 +0000 (0:00:03.631) 0:00:21.417 ***********
2025-05-13 23:35:24.879751 | orchestrator | ===============================================================================
2025-05-13 23:35:24.879757 | orchestrator | redis : Restart redis container ----------------------------------------- 5.02s
2025-05-13 23:35:24.879763 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.63s
2025-05-13 23:35:24.879769 | orchestrator | redis : Copying over redis config files --------------------------------- 3.33s
2025-05-13 23:35:24.879775 | orchestrator | redis : Copying over default config.json files -------------------------- 3.15s
2025-05-13 23:35:24.879781 | orchestrator | redis : Check redis containers ------------------------------------------ 1.79s
2025-05-13 23:35:24.879787 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.50s
2025-05-13 23:35:24.879793 | orchestrator | redis : include_tasks --------------------------------------------------- 0.90s
2025-05-13 23:35:24.879800 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2025-05-13 23:35:24.879805 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2025-05-13 23:35:24.879810 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s
2025-05-13 23:35:24.879833 | orchestrator | 2025-05-13 23:35:24 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED
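Each `INFO  | Task <uuid> is in state ...` line is the deployment client polling task state (the STARTED/SUCCESS names match Celery's task states) and sleeping between checks. The same check-sleep-retry pattern expressed as a generic Ansible until-loop; the status URL and JSON shape are hypothetical, purely to illustrate the idiom:

```yaml
# Hedged sketch of the poll-until-SUCCESS loop visible in the log.
# The endpoint and response shape are invented for illustration; the real
# client queries Celery task state by UUID.
- name: Wait until the task reports SUCCESS
  ansible.builtin.uri:
    url: "https://api.example.com/tasks/ff97e140-4646-4b4b-8615-cf5eb0c732bc"
    return_content: true
  register: task_state
  until: task_state.json.state == "SUCCESS"
  retries: 120   # give up eventually instead of polling forever
  delay: 1       # "Wait 1 second(s) until the next check"
```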
2025-05-13 23:35:24.879838 | orchestrator | 2025-05-13 23:35:24 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:35:27.939640 | orchestrator | 2025-05-13 23:35:27 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:35:27.940880 | orchestrator | 2025-05-13 23:35:27 | INFO  | Task facf79d2-ea2c-4cd8-91d6-9794cdaf7657 is in state STARTED
2025-05-13 23:35:27.941956 | orchestrator | 2025-05-13 23:35:27 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:35:27.944308 | orchestrator | 2025-05-13 23:35:27 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:35:27.945384 | orchestrator | 2025-05-13 23:35:27 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED
2025-05-13 23:35:27.945460 | orchestrator | 2025-05-13 23:35:27 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:36:13.768896 | orchestrator | 2025-05-13 23:36:13 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED
2025-05-13 23:36:13.769470 | orchestrator | 2025-05-13 23:36:13 | INFO  | Task facf79d2-ea2c-4cd8-91d6-9794cdaf7657 is in state SUCCESS
2025-05-13 23:36:13.770396 | orchestrator |
2025-05-13 23:36:13.770438 | orchestrator |
2025-05-13 23:36:13.770451 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:36:13.770463 | orchestrator |
2025-05-13 23:36:13.770474 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:36:13.770485 | orchestrator | Tuesday 13 May 2025 23:35:01 +0000 (0:00:00.496) 0:00:00.496 ***********
2025-05-13 23:36:13.770496 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:36:13.770535 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:36:13.770546 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:36:13.770557 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:13.770568 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:13.770593 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:13.770604 | orchestrator |
2025-05-13 23:36:13.770615 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 23:36:13.770626 | orchestrator | Tuesday 13 May 2025 23:35:02 +0000 (0:00:01.465) 0:00:01.962 ***********
2025-05-13 23:36:13.770636 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-13 23:36:13.770647 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-13 23:36:13.770704 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-13 23:36:13.770716 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-13 23:36:13.770726 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-13 23:36:13.770737 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
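The two grouping plays that precede every role run build Ansible groups dynamically from configuration flags; the item name above encodes two booleans in one group key. A minimal sketch of the idiom, assuming the usual `group_by` pattern (the variable names come from the log, the playbook text itself is hypothetical):

```yaml
# Hedged sketch of the dynamic grouping idiom: each host adds itself to a
# group whose name encodes its configuration, e.g.
# "enable_openvswitch_True_enable_ovs_dpdk_False".
- name: Group hosts based on configuration
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_openvswitch_{{ enable_openvswitch }}_enable_ovs_dpdk_{{ enable_ovs_dpdk }}"

# A later play can then target only the matching hosts:
# - hosts: enable_openvswitch_True_enable_ovs_dpdk_False
#   roles:
#     - openvswitch
```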
23:36:13.770748 | orchestrator | 2025-05-13 23:36:13.770758 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-13 23:36:13.770769 | orchestrator | 2025-05-13 23:36:13.770780 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-13 23:36:13.770795 | orchestrator | Tuesday 13 May 2025 23:35:03 +0000 (0:00:01.033) 0:00:02.996 *********** 2025-05-13 23:36:13.770808 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:36:13.770821 | orchestrator | 2025-05-13 23:36:13.770832 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-13 23:36:13.770842 | orchestrator | Tuesday 13 May 2025 23:35:04 +0000 (0:00:01.390) 0:00:04.386 *********** 2025-05-13 23:36:13.770853 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-13 23:36:13.770864 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-13 23:36:13.770875 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-13 23:36:13.770885 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-13 23:36:13.770895 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-13 23:36:13.770906 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-13 23:36:13.770916 | orchestrator | 2025-05-13 23:36:13.770926 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-13 23:36:13.770937 | orchestrator | Tuesday 13 May 2025 23:35:06 +0000 (0:00:01.642) 0:00:06.029 *********** 2025-05-13 23:36:13.770948 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-13 23:36:13.770958 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-13 23:36:13.770968 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-13 23:36:13.770979 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-13 23:36:13.770990 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-13 23:36:13.771002 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-13 23:36:13.771014 | orchestrator | 2025-05-13 23:36:13.771026 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-13 23:36:13.771038 | orchestrator | Tuesday 13 May 2025 23:35:09 +0000 (0:00:02.579) 0:00:08.608 *********** 2025-05-13 23:36:13.771050 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-13 23:36:13.771062 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:13.771076 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-13 23:36:13.771088 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:13.771101 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-13 23:36:13.771121 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:13.771133 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-13 23:36:13.771145 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:13.771157 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-13 23:36:13.771168 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:13.771179 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-13 23:36:13.771190 | 
orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:13.771201 | orchestrator | 2025-05-13 23:36:13.771212 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-13 23:36:13.771222 | orchestrator | Tuesday 13 May 2025 23:35:11 +0000 (0:00:02.136) 0:00:10.745 *********** 2025-05-13 23:36:13.771233 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:13.771243 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:13.771254 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:13.771264 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:13.771274 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:13.771285 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:13.771295 | orchestrator | 2025-05-13 23:36:13.771306 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-13 23:36:13.771316 | orchestrator | Tuesday 13 May 2025 23:35:12 +0000 (0:00:00.951) 0:00:11.696 *********** 2025-05-13 23:36:13.771354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771383 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771531 | orchestrator | 2025-05-13 23:36:13.771542 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-13 23:36:13.771553 | orchestrator | Tuesday 13 May 2025 23:35:14 +0000 (0:00:01.893) 0:00:13.590 *********** 2025-05-13 23:36:13.771570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771594 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771690 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771763 | orchestrator | 2025-05-13 23:36:13.771778 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-13 23:36:13.771789 | orchestrator | Tuesday 13 May 2025 23:35:18 +0000 (0:00:04.423) 0:00:18.014 *********** 2025-05-13 23:36:13.771800 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:13.771811 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:13.771821 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:13.771832 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:13.771842 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:13.771853 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:13.771863 | orchestrator | 2025-05-13 23:36:13.771874 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-13 23:36:13.771885 | orchestrator | Tuesday 13 May 2025 23:35:20 +0000 (0:00:01.580) 0:00:19.594 *********** 2025-05-13 23:36:13.771897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.771985 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.772008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.772019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.772030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 23:36:13.772053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.772066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 23:36:13.772083 | orchestrator | 2025-05-13 23:36:13.772094 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 23:36:13.772105 | orchestrator | Tuesday 13 May 2025 23:35:22 +0000 (0:00:02.767) 0:00:22.361 *********** 2025-05-13 23:36:13.772115 | orchestrator | 2025-05-13 23:36:13.772126 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 23:36:13.772137 | orchestrator | Tuesday 13 May 2025 23:35:23 +0000 (0:00:00.286) 0:00:22.648 *********** 2025-05-13 23:36:13.772147 | orchestrator | 2025-05-13 23:36:13.772158 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 23:36:13.772168 | orchestrator | Tuesday 13 May 2025 23:35:23 +0000 (0:00:00.142) 0:00:22.791 *********** 2025-05-13 23:36:13.772178 | orchestrator | 2025-05-13 23:36:13.772189 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 23:36:13.772199 | orchestrator | Tuesday 13 May 2025 23:35:23 +0000 (0:00:00.159) 0:00:22.950 *********** 2025-05-13 23:36:13.772210 | orchestrator | 2025-05-13 23:36:13.772220 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 23:36:13.772230 | orchestrator | Tuesday 13 May 2025 23:35:23 +0000 (0:00:00.167) 0:00:23.118 *********** 2025-05-13 23:36:13.772241 | orchestrator | 2025-05-13 23:36:13.772251 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 23:36:13.772261 | orchestrator | Tuesday 13 May 2025 23:35:23 +0000 (0:00:00.217) 0:00:23.336 *********** 2025-05-13 23:36:13.772272 | orchestrator | 2025-05-13 23:36:13.772283 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-13 23:36:13.772293 | orchestrator | Tuesday 13 May 2025 23:35:24 +0000 (0:00:00.352) 0:00:23.689 *********** 2025-05-13 23:36:13.772304 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:36:13.772315 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:36:13.772325 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:36:13.772336 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:36:13.772346 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:36:13.772357 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:36:13.772367 | orchestrator | 2025-05-13 23:36:13.772378 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-13 23:36:13.772388 | orchestrator | Tuesday 13 May 2025 23:35:35 +0000 (0:00:11.239) 0:00:34.929 *********** 2025-05-13 23:36:13.772398 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:36:13.772409 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:36:13.772419 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:36:13.772430 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:36:13.772440 | orchestrator | ok: [testbed-node-1] 2025-05-13 
23:36:13.772450 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:36:13.772460 | orchestrator | 2025-05-13 23:36:13.772471 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-13 23:36:13.772481 | orchestrator | Tuesday 13 May 2025 23:35:37 +0000 (0:00:01.894) 0:00:36.829 *********** 2025-05-13 23:36:13.772492 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:36:13.772502 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:36:13.772513 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:36:13.772523 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:36:13.772533 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:36:13.772544 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:36:13.772554 | orchestrator | 2025-05-13 23:36:13.772565 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-13 23:36:13.772575 | orchestrator | Tuesday 13 May 2025 23:35:47 +0000 (0:00:09.686) 0:00:46.515 *********** 2025-05-13 23:36:13.772586 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-13 23:36:13.772597 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-13 23:36:13.772614 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-13 23:36:13.772624 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-13 23:36:13.772635 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-13 23:36:13.772794 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-13 23:36:13.772810 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-13 23:36:13.772821 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-13 23:36:13.772837 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-13 23:36:13.772848 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-13 23:36:13.772859 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-13 23:36:13.772869 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-13 23:36:13.772880 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 23:36:13.772891 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 23:36:13.772901 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 23:36:13.772912 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 23:36:13.772922 | orchestrator | ok: [testbed-node-5] => 
(item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 23:36:13.772933 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 23:36:13.772943 | orchestrator | 2025-05-13 23:36:13.772954 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-13 23:36:13.772965 | orchestrator | Tuesday 13 May 2025 23:35:55 +0000 (0:00:08.019) 0:00:54.535 *********** 2025-05-13 23:36:13.772975 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-13 23:36:13.772986 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:13.772997 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-13 23:36:13.773008 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:13.773018 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-13 23:36:13.773029 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:13.773040 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-13 23:36:13.773051 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-13 23:36:13.773061 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-13 23:36:13.773072 | orchestrator | 2025-05-13 23:36:13.773082 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-13 23:36:13.773093 | orchestrator | Tuesday 13 May 2025 23:35:57 +0000 (0:00:02.555) 0:00:57.091 *********** 2025-05-13 23:36:13.773103 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-13 23:36:13.773114 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:13.773125 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-13 23:36:13.773135 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:13.773146 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-13 23:36:13.773165 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:13.773176 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-13 23:36:13.773186 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-13 23:36:13.773197 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-13 23:36:13.773207 | orchestrator | 2025-05-13 23:36:13.773218 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-13 23:36:13.773229 | orchestrator | Tuesday 13 May 2025 23:36:03 +0000 (0:00:05.450) 0:01:02.541 *********** 2025-05-13 23:36:13.773239 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:36:13.773250 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:36:13.773261 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:36:13.773271 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:36:13.773282 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:36:13.773292 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:36:13.773302 | orchestrator | 2025-05-13 23:36:13.773313 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:36:13.773324 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 23:36:13.773336 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 23:36:13.773347 | orchestrator | 
testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 23:36:13.773358 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:36:13.773369 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:36:13.773386 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:36:13.773397 | orchestrator | 2025-05-13 23:36:13.773408 | orchestrator | 2025-05-13 23:36:13.773419 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:36:13.773431 | orchestrator | Tuesday 13 May 2025 23:36:11 +0000 (0:00:08.798) 0:01:11.339 *********** 2025-05-13 23:36:13.773443 | orchestrator | =============================================================================== 2025-05-13 23:36:13.773460 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.48s 2025-05-13 23:36:13.773473 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.24s 2025-05-13 23:36:13.773484 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.02s 2025-05-13 23:36:13.773496 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.45s 2025-05-13 23:36:13.773508 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.42s 2025-05-13 23:36:13.773520 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.77s 2025-05-13 23:36:13.773531 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.58s 2025-05-13 23:36:13.773543 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.56s 2025-05-13 23:36:13.773554 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.14s 2025-05-13 23:36:13.773566 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.90s 2025-05-13 23:36:13.773578 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.89s 2025-05-13 23:36:13.773590 | orchestrator | module-load : Load modules ---------------------------------------------- 1.64s 2025-05-13 23:36:13.773602 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.58s 2025-05-13 23:36:13.773621 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.47s 2025-05-13 23:36:13.773633 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.39s 2025-05-13 23:36:13.773644 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.33s 2025-05-13 23:36:13.773654 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.03s 2025-05-13 23:36:13.773685 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.95s 2025-05-13 23:36:13.773718 | orchestrator | 2025-05-13 23:36:13 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED 2025-05-13 23:36:13.773730 | orchestrator | 2025-05-13 23:36:13 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:36:13.777035 | orchestrator | 2025-05-13 23:36:13 | INFO  | Task 
bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED 2025-05-13 23:36:13.777490 | orchestrator | 2025-05-13 23:36:13 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED 2025-05-13 23:36:13.777757 | orchestrator | 2025-05-13 23:36:13 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:36:16.819584 | orchestrator | 2025-05-13 23:36:16 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 23:36:16.819938 | orchestrator | 2025-05-13 23:36:16 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED 2025-05-13 23:36:16.820395 | orchestrator | 2025-05-13 23:36:16 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:36:16.821091 | orchestrator | 2025-05-13 23:36:16 | INFO  | Task bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED 2025-05-13 23:36:16.822137 | orchestrator | 2025-05-13 23:36:16 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED 2025-05-13 23:36:16.822204 | orchestrator | 2025-05-13 23:36:16 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:36:19.850238 | orchestrator | 2025-05-13 23:36:19 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state STARTED 2025-05-13 23:36:19.850434 | orchestrator | 2025-05-13 23:36:19 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED 2025-05-13 23:36:19.850844 | orchestrator | 2025-05-13 23:36:19 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:36:19.851348 | orchestrator | 2025-05-13 23:36:19 | INFO  | Task bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED 2025-05-13 23:36:19.851935 | orchestrator | 2025-05-13 23:36:19 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED 2025-05-13 23:36:19.852078 | orchestrator | 2025-05-13 23:36:19 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:36:22.882904 | orchestrator | 2025-05-13 23:36:22 | INFO  | Task ff97e140-4646-4b4b-8615-cf5eb0c732bc is in state SUCCESS 2025-05-13 23:36:22.883936 | orchestrator | 2025-05-13 23:36:22.883986 | orchestrator | 2025-05-13 23:36:22.884006 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-05-13 23:36:22.884025 | orchestrator | 2025-05-13 23:36:22.884043 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-05-13 23:36:22.884061 | orchestrator | Tuesday 13 May 2025 23:32:09 +0000 (0:00:00.266) 0:00:00.266 *********** 2025-05-13 23:36:22.884080 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:36:22.884102 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:36:22.884122 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:36:22.884141 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:36:22.884159 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:36:22.884178 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:36:22.884198 | orchestrator | 2025-05-13 23:36:22.884217 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-05-13 23:36:22.884262 | orchestrator | Tuesday 13 May 2025 23:32:10 +0000 (0:00:00.843) 0:00:01.110 *********** 2025-05-13 23:36:22.884274 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.884287 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.884297 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.884307 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:22.884318 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 23:36:22.884329 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.884339 | orchestrator | 2025-05-13 23:36:22.884350 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-05-13 23:36:22.884361 | orchestrator | Tuesday 13 May 2025 23:32:11 +0000 (0:00:00.827) 0:00:01.938 *********** 2025-05-13 23:36:22.884372 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.884383 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.884410 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.884421 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:22.884432 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:22.884442 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.884452 | orchestrator | 2025-05-13 23:36:22.884463 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-05-13 23:36:22.884474 | orchestrator | Tuesday 13 May 2025 23:32:12 +0000 (0:00:00.932) 0:00:02.871 *********** 2025-05-13 23:36:22.884485 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:36:22.884495 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:36:22.884506 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:36:22.884516 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:36:22.884527 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:36:22.884541 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:36:22.884553 | orchestrator | 2025-05-13 23:36:22.884566 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-05-13 23:36:22.884578 | orchestrator | Tuesday 13 May 2025 23:32:14 +0000 (0:00:02.193) 0:00:05.064 *********** 2025-05-13 23:36:22.884590 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:36:22.884602 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:36:22.884614 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:36:22.884626 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:36:22.884638 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:36:22.884650 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:36:22.884723 | orchestrator | 2025-05-13 23:36:22.884747 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-05-13 23:36:22.884766 | orchestrator | Tuesday 13 May 2025 23:32:15 +0000 (0:00:01.288) 0:00:06.352 *********** 2025-05-13 23:36:22.884785 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:36:22.884797 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:36:22.884809 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:36:22.884821 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:36:22.884833 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:36:22.884845 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:36:22.884857 | orchestrator | 2025-05-13 23:36:22.884869 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-05-13 23:36:22.884881 | orchestrator | Tuesday 13 May 2025 23:32:16 +0000 (0:00:01.148) 0:00:07.500 *********** 2025-05-13 23:36:22.884892 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.884902 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.884913 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.884924 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:22.884934 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 23:36:22.884944 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.884955 | orchestrator | 2025-05-13 23:36:22.884966 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-05-13 23:36:22.884976 | orchestrator | Tuesday 13 May 2025 23:32:17 +0000 (0:00:00.644) 0:00:08.145 *********** 2025-05-13 23:36:22.884987 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.884998 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.885023 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.885034 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:22.885045 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:22.885055 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.885066 | orchestrator | 2025-05-13 23:36:22.885077 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-05-13 23:36:22.885087 | orchestrator | Tuesday 13 May 2025 23:32:18 +0000 (0:00:00.617) 0:00:08.763 *********** 2025-05-13 23:36:22.885098 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 23:36:22.885109 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 23:36:22.885120 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.885131 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 23:36:22.885141 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 23:36:22.885152 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.885163 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 23:36:22.885173 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 23:36:22.885184 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.885195 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 23:36:22.885223 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 23:36:22.885235 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:22.885246 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 23:36:22.885257 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 23:36:22.885267 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:22.885278 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 23:36:22.885289 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 23:36:22.885299 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.885310 | orchestrator | 2025-05-13 23:36:22.885320 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-05-13 23:36:22.885338 | orchestrator | Tuesday 13 May 2025 23:32:18 +0000 (0:00:00.750) 0:00:09.514 *********** 2025-05-13 23:36:22.885349 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.885359 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.885370 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.885381 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:22.885391 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 23:36:22.885402 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.885412 | orchestrator | 2025-05-13 23:36:22.885423 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-05-13 23:36:22.885435 | orchestrator | Tuesday 13 May 2025 23:32:19 +0000 (0:00:01.102) 0:00:10.616 *********** 2025-05-13 23:36:22.885445 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:36:22.885456 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:36:22.885467 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:36:22.885477 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:36:22.885488 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:36:22.885498 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:36:22.885508 | orchestrator | 2025-05-13 23:36:22.885519 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-05-13 23:36:22.885530 | orchestrator | Tuesday 13 May 2025 23:32:20 +0000 (0:00:00.824) 0:00:11.440 *********** 2025-05-13 23:36:22.885540 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:36:22.885551 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:36:22.885562 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:36:22.885572 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:36:22.885590 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:36:22.885600 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:36:22.885611 | orchestrator | 2025-05-13 23:36:22.885622 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-05-13 23:36:22.885641 | orchestrator | Tuesday 13 May 2025 23:32:27 +0000 (0:00:06.757) 0:00:18.198 *********** 2025-05-13 23:36:22.885660 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.885708 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.885725 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.885742 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:22.885759 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:22.885776 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.885793 | orchestrator | 2025-05-13 23:36:22.885809 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-05-13 23:36:22.885826 | orchestrator | Tuesday 13 May 2025 23:32:28 +0000 (0:00:01.209) 0:00:19.407 *********** 2025-05-13 23:36:22.885843 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.885860 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.885878 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.885894 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:22.885913 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:22.885931 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.885949 | orchestrator | 2025-05-13 23:36:22.885966 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-05-13 23:36:22.885989 | orchestrator | Tuesday 13 May 2025 23:32:31 +0000 (0:00:02.375) 0:00:21.783 *********** 2025-05-13 23:36:22.886006 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.886143 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.886165 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.886183 | orchestrator | skipping: [testbed-node-0] 
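The "Download k3s binary x64" task above fetches the single k3s binary onto each node. A rough shell equivalent, assuming the upstream GitHub releases as the source and a pinned version (both assumptions; the role's actual URL and checksum handling are not visible in this log):

  # Version pinned for illustration only; the playbook's value is not shown
  # here. Note the '+' in the tag must be URL-encoded as %2B in the asset URL.
  K3S_VERSION="v1.30.0+k3s1"
  URL="https://github.com/k3s-io/k3s/releases/download/${K3S_VERSION/+/%2B}/k3s"

  # Fetch the x86_64 binary and install it into the PATH.
  curl -fsSL -o /tmp/k3s "$URL"
  sudo install -m 0755 /tmp/k3s /usr/local/bin/k3s

The arm64 and armhf download tasks, skipped above on these amd64 nodes, would use the k3s-arm64 and k3s-armhf release artifacts instead.
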
2025-05-13 23:36:22.886195 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:22.886205 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.886216 | orchestrator | 2025-05-13 23:36:22.886227 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-05-13 23:36:22.886238 | orchestrator | Tuesday 13 May 2025 23:32:31 +0000 (0:00:00.773) 0:00:22.556 *********** 2025-05-13 23:36:22.886248 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-05-13 23:36:22.886260 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-05-13 23:36:22.886270 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.886281 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-05-13 23:36:22.886292 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-05-13 23:36:22.886302 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.886313 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-05-13 23:36:22.886323 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-05-13 23:36:22.886334 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.886345 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-05-13 23:36:22.886355 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-05-13 23:36:22.886366 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:22.886376 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-05-13 23:36:22.886387 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-05-13 23:36:22.886398 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:22.886408 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-05-13 23:36:22.886419 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-05-13 23:36:22.886430 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.886440 | orchestrator | 2025-05-13 23:36:22.886451 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-05-13 23:36:22.886473 | orchestrator | Tuesday 13 May 2025 23:32:32 +0000 (0:00:00.994) 0:00:23.551 *********** 2025-05-13 23:36:22.886498 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:36:22.886509 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:36:22.886520 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:36:22.886530 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:36:22.886541 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:36:22.886552 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:36:22.886562 | orchestrator | 2025-05-13 23:36:22.886573 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-05-13 23:36:22.886589 | orchestrator | 2025-05-13 23:36:22.886608 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-05-13 23:36:22.886626 | orchestrator | Tuesday 13 May 2025 23:32:33 +0000 (0:00:00.983) 0:00:24.534 *********** 2025-05-13 23:36:22.886644 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:36:22.886727 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:36:22.886761 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:36:22.886783 | orchestrator | 2025-05-13 23:36:22.886803 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-05-13 23:36:22.886823 | orchestrator | 
2025-05-13 23:36:22.886573 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-05-13 23:36:22.886589 | orchestrator |
2025-05-13 23:36:22.886608 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-05-13 23:36:22.886626 | orchestrator | Tuesday 13 May 2025 23:32:33 +0000 (0:00:00.983) 0:00:24.534 ***********
2025-05-13 23:36:22.886644 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.886727 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.886761 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.886783 | orchestrator |
2025-05-13 23:36:22.886803 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-05-13 23:36:22.886823 | orchestrator | Tuesday 13 May 2025 23:32:35 +0000 (0:00:01.337) 0:00:25.872 ***********
2025-05-13 23:36:22.886842 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.886862 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.886881 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.886897 | orchestrator |
2025-05-13 23:36:22.886915 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-05-13 23:36:22.886934 | orchestrator | Tuesday 13 May 2025 23:32:36 +0000 (0:00:01.362) 0:00:27.234 ***********
2025-05-13 23:36:22.886951 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.886969 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.886986 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.887003 | orchestrator |
2025-05-13 23:36:22.887021 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-05-13 23:36:22.887039 | orchestrator | Tuesday 13 May 2025 23:32:37 +0000 (0:00:01.066) 0:00:28.301 ***********
2025-05-13 23:36:22.887058 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.887075 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.887089 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.887098 | orchestrator |
2025-05-13 23:36:22.887108 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-05-13 23:36:22.887118 | orchestrator | Tuesday 13 May 2025 23:32:38 +0000 (0:00:00.826) 0:00:29.127 ***********
2025-05-13 23:36:22.887127 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.887137 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.887146 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.887156 | orchestrator |
2025-05-13 23:36:22.887166 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-05-13 23:36:22.887175 | orchestrator | Tuesday 13 May 2025 23:32:38 +0000 (0:00:00.416) 0:00:29.543 ***********
2025-05-13 23:36:22.887185 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:36:22.887194 | orchestrator |
2025-05-13 23:36:22.887204 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-05-13 23:36:22.887213 | orchestrator | Tuesday 13 May 2025 23:32:39 +0000 (0:00:00.790) 0:00:30.334 ***********
2025-05-13 23:36:22.887223 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.887232 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.887241 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.887250 | orchestrator |
2025-05-13 23:36:22.887260 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-05-13 23:36:22.887270 | orchestrator | Tuesday 13 May 2025 23:32:41 +0000 (0:00:02.230) 0:00:32.565 ***********
2025-05-13 23:36:22.887279 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.887289 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.887298 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.887308 | orchestrator |
2025-05-13 23:36:22.887317 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-05-13 23:36:22.887338 | orchestrator | Tuesday 13 May 2025 23:32:42 +0000 (0:00:00.650) 0:00:33.215 ***********
2025-05-13 23:36:22.887347 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.887357 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.887366 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.887375 | orchestrator |
2025-05-13 23:36:22.887385 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-05-13 23:36:22.887394 | orchestrator | Tuesday 13 May 2025 23:32:43 +0000 (0:00:00.737) 0:00:33.953 ***********
2025-05-13 23:36:22.887404 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.887413 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.887423 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.887432 | orchestrator |
2025-05-13 23:36:22.887442 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-05-13 23:36:22.887451 | orchestrator | Tuesday 13 May 2025 23:32:45 +0000 (0:00:02.487) 0:00:36.440 ***********
2025-05-13 23:36:22.887464 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.887481 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.887498 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.887513 | orchestrator |
2025-05-13 23:36:22.887529 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-05-13 23:36:22.887540 | orchestrator | Tuesday 13 May 2025 23:32:46 +0000 (0:00:00.414) 0:00:36.855 ***********
2025-05-13 23:36:22.887555 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.887580 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.887599 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.887615 | orchestrator |
2025-05-13 23:36:22.887630 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-05-13 23:36:22.887645 | orchestrator | Tuesday 13 May 2025 23:32:46 +0000 (0:00:00.497) 0:00:37.353 ***********
2025-05-13 23:36:22.887660 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.887702 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:36:22.887718 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:36:22.887733 | orchestrator |
2025-05-13 23:36:22.887750 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-05-13 23:36:22.887767 | orchestrator | Tuesday 13 May 2025 23:32:49 +0000 (0:00:02.530) 0:00:39.884 ***********
2025-05-13 23:36:22.887798 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-13 23:36:22.887812 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-13 23:36:22.887823 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-13 23:36:22.887832 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-13 23:36:22.887850 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-13 23:36:22.887860 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
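The FAILED - RETRYING lines around this point are the normal Ansible until/retries pattern rather than an error: the check is simply re-run until every server shows up in the cluster. A minimal sketch of such a verification task, assuming a 'master' inventory group and not necessarily the exact implementation used here:

    - name: Verify that all nodes actually joined
      ansible.builtin.command:
        cmd: k3s kubectl get nodes -o jsonpath='{.items[*].metadata.name}'
      register: joined_nodes
      # Keep polling until the node list contains every expected server.
      until: >-
        joined_nodes.rc == 0 and
        (joined_nodes.stdout.split() | length) >= (groups['master'] | length)
      retries: 20
      delay: 10
      changed_when: false

With 20 retries and a delay between attempts, this comfortably covers the cluster bootstrap, which took about 45 seconds in this run.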
2025-05-13 23:36:22.887869 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-13 23:36:22.887879 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-13 23:36:22.887888 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-13 23:36:22.887898 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-13 23:36:22.887917 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-13 23:36:22.887926 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-13 23:36:22.887936 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.887946 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.887955 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.887965 | orchestrator |
2025-05-13 23:36:22.887974 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-05-13 23:36:22.887984 | orchestrator | Tuesday 13 May 2025 23:33:34 +0000 (0:00:45.823) 0:01:25.708 ***********
2025-05-13 23:36:22.887993 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.888003 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.888012 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.888021 | orchestrator |
2025-05-13 23:36:22.888031 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-05-13 23:36:22.888041 | orchestrator | Tuesday 13 May 2025 23:33:35 +0000 (0:00:00.333) 0:01:26.041 ***********
2025-05-13 23:36:22.888050 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.888060 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:36:22.888070 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:36:22.888079 | orchestrator |
2025-05-13 23:36:22.888088 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-05-13 23:36:22.888098 | orchestrator | Tuesday 13 May 2025 23:33:36 +0000 (0:00:01.020) 0:01:27.062 ***********
2025-05-13 23:36:22.888107 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:36:22.888117 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.888126 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:36:22.888136 | orchestrator |
2025-05-13 23:36:22.888145 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-05-13 23:36:22.888155 | orchestrator | Tuesday 13 May 2025 23:33:37 +0000 (0:00:01.353) 0:01:28.415 ***********
2025-05-13 23:36:22.888164 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:36:22.888174 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:36:22.888183 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.888192 | orchestrator |
2025-05-13 23:36:22.888202 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-05-13 23:36:22.888212 | orchestrator | Tuesday 13 May 2025 23:33:50 +0000 (0:00:13.016) 0:01:41.432 ***********
2025-05-13 23:36:22.888221 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.888231 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.888240 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.888249 | orchestrator |
2025-05-13 23:36:22.888259 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-05-13 23:36:22.888269 | orchestrator | Tuesday 13 May 2025 23:33:51 +0000 (0:00:00.833) 0:01:42.265 ***********
2025-05-13 23:36:22.888278 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.888287 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.888297 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.888306 | orchestrator |
2025-05-13 23:36:22.888315 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-05-13 23:36:22.888325 | orchestrator | Tuesday 13 May 2025 23:33:52 +0000 (0:00:00.675) 0:01:42.941 ***********
2025-05-13 23:36:22.888334 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.888344 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:36:22.888353 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:36:22.888362 | orchestrator |
2025-05-13 23:36:22.888372 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-05-13 23:36:22.888381 | orchestrator | Tuesday 13 May 2025 23:33:52 +0000 (0:00:00.662) 0:01:43.603 ***********
2025-05-13 23:36:22.888391 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.888400 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.888416 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.888426 | orchestrator |
2025-05-13 23:36:22.888435 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-05-13 23:36:22.888445 | orchestrator | Tuesday 13 May 2025 23:33:53 +0000 (0:00:00.955) 0:01:44.558 ***********
2025-05-13 23:36:22.888461 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.888471 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.888480 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.888490 | orchestrator |
2025-05-13 23:36:22.888499 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-05-13 23:36:22.888509 | orchestrator | Tuesday 13 May 2025 23:33:54 +0000 (0:00:00.340) 0:01:44.899 ***********
2025-05-13 23:36:22.888518 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.888528 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:36:22.888537 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:36:22.888547 | orchestrator |
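The node-token sequence above (register the file mode, widen it, read the token, store it, restore the mode) is the usual pattern for handing the k3s join token to the remaining nodes. A sketch of the read and store steps, assuming the default k3s token path:

    - name: Read node-token from master
      ansible.builtin.slurp:
        src: /var/lib/rancher/k3s/server/node-token
      register: node_token_raw

    - name: Store Master node-token
      ansible.builtin.set_fact:
        # slurp returns base64; decode and strip the trailing newline
        k3s_token: "{{ node_token_raw.content | b64decode | trim }}"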
2025-05-13 23:36:22.888556 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-05-13 23:36:22.888566 | orchestrator | Tuesday 13 May 2025 23:33:54 +0000 (0:00:00.702) 0:01:45.602 ***********
2025-05-13 23:36:22.888575 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.888584 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:36:22.888599 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:36:22.888608 | orchestrator |
2025-05-13 23:36:22.888618 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-05-13 23:36:22.888628 | orchestrator | Tuesday 13 May 2025 23:33:55 +0000 (0:00:00.788) 0:01:46.390 ***********
2025-05-13 23:36:22.888637 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.888647 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:36:22.888657 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:36:22.888684 | orchestrator |
2025-05-13 23:36:22.888694 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-05-13 23:36:22.888703 | orchestrator | Tuesday 13 May 2025 23:33:57 +0000 (0:00:01.384) 0:01:47.775 ***********
2025-05-13 23:36:22.888713 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:36:22.888722 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:36:22.888732 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:36:22.888741 | orchestrator |
2025-05-13 23:36:22.888750 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-05-13 23:36:22.888760 | orchestrator | Tuesday 13 May 2025 23:33:57 +0000 (0:00:00.832) 0:01:48.608 ***********
2025-05-13 23:36:22.888769 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.888779 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.888788 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.888798 | orchestrator |
2025-05-13 23:36:22.888807 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-05-13 23:36:22.888816 | orchestrator | Tuesday 13 May 2025 23:33:58 +0000 (0:00:00.280) 0:01:48.888 ***********
2025-05-13 23:36:22.888826 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.888835 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.888844 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.888854 | orchestrator |
2025-05-13 23:36:22.888863 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-05-13 23:36:22.888873 | orchestrator | Tuesday 13 May 2025 23:33:58 +0000 (0:00:00.350) 0:01:49.238 ***********
2025-05-13 23:36:22.888882 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.888891 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.888901 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.888910 | orchestrator |
2025-05-13 23:36:22.888919 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-05-13 23:36:22.888929 | orchestrator | Tuesday 13 May 2025 23:33:59 +0000 (0:00:01.161) 0:01:50.400 ***********
2025-05-13 23:36:22.888938 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.888948 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.888957 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.888967 | orchestrator |
2025-05-13 23:36:22.888983 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-05-13 23:36:22.888993 | orchestrator | Tuesday 13 May 2025 23:34:00 +0000 (0:00:00.734) 0:01:51.135 ***********
2025-05-13 23:36:22.889002 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-13 23:36:22.889012 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-13 23:36:22.889021 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-13 23:36:22.889030 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-13 23:36:22.889040 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-13 23:36:22.889049 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-13 23:36:22.889059 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-13 23:36:22.889068 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-13 23:36:22.889077 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-13 23:36:22.889087 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-05-13 23:36:22.889097 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-13 23:36:22.889106 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-13 23:36:22.889115 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-05-13 23:36:22.889124 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-05-13 23:36:22.889134 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-05-13 23:36:22.889143 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-05-13 23:36:22.889158 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-13 23:36:22.889168 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-05-13 23:36:22.889177 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-05-13 23:36:22.889187 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-05-13 23:36:22.889196 | orchestrator |
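Deleting these manifests matters because k3s re-applies everything under server/manifests on every start; once the bootstrap resources are live they are managed elsewhere. A sketch of such a cleanup task, with the paths taken from the loop items above:

    - name: Remove manifests and folders that are only needed for bootstrapping
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop:
        - /var/lib/rancher/k3s/server/manifests/ccm.yaml
        - /var/lib/rancher/k3s/server/manifests/rolebindings.yaml
        - /var/lib/rancher/k3s/server/manifests/local-storage.yaml
        - /var/lib/rancher/k3s/server/manifests/vip.yaml
        - /var/lib/rancher/k3s/server/manifests/vip-rbac.yaml
        - /var/lib/rancher/k3s/server/manifests/runtimes.yaml
        - /var/lib/rancher/k3s/server/manifests/coredns.yaml
        - /var/lib/rancher/k3s/server/manifests/metrics-server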
2025-05-13 23:36:22.889206 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-05-13 23:36:22.889215 | orchestrator |
2025-05-13 23:36:22.889225 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-05-13 23:36:22.889239 | orchestrator | Tuesday 13 May 2025 23:34:03 +0000 (0:00:03.455) 0:01:54.590 ***********
2025-05-13 23:36:22.889249 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:36:22.889258 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:36:22.889268 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:36:22.889277 | orchestrator |
2025-05-13 23:36:22.889287 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-05-13 23:36:22.889297 | orchestrator | Tuesday 13 May 2025 23:34:04 +0000 (0:00:00.732) 0:01:55.323 ***********
2025-05-13 23:36:22.889306 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:36:22.889315 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:36:22.889325 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:36:22.889334 | orchestrator |
2025-05-13 23:36:22.889344 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-05-13 23:36:22.889353 | orchestrator | Tuesday 13 May 2025 23:34:05 +0000 (0:00:00.716) 0:01:56.039 ***********
2025-05-13 23:36:22.889375 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:36:22.889384 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:36:22.889394 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:36:22.889403 | orchestrator |
2025-05-13 23:36:22.889413 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-05-13 23:36:22.889422 | orchestrator | Tuesday 13 May 2025 23:34:05 +0000 (0:00:00.374) 0:01:56.413 ***********
2025-05-13 23:36:22.889431 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:36:22.889441 | orchestrator |
2025-05-13 23:36:22.889450 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-05-13 23:36:22.889460 | orchestrator | Tuesday 13 May 2025 23:34:06 +0000 (0:00:00.746) 0:01:57.160 ***********
2025-05-13 23:36:22.889470 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:36:22.889479 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:36:22.889489 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:36:22.889498 | orchestrator |
2025-05-13 23:36:22.889508 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-05-13 23:36:22.889517 | orchestrator | Tuesday 13 May 2025 23:34:06 +0000 (0:00:00.345) 0:01:57.506 ***********
2025-05-13 23:36:22.889526 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:36:22.889536 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:36:22.889546 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:36:22.889555 | orchestrator |
2025-05-13 23:36:22.889565 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-05-13 23:36:22.889574 | orchestrator | Tuesday 13 May 2025 23:34:07 +0000 (0:00:00.366) 0:01:57.872 ***********
2025-05-13 23:36:22.889584 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:36:22.889593 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:36:22.889602 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:36:22.889612 | orchestrator |
2025-05-13 23:36:22.889621 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-05-13 23:36:22.889631 | orchestrator | Tuesday 13 May 2025 23:34:07 +0000 (0:00:00.333) 0:01:58.206 ***********
2025-05-13 23:36:22.889640 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:36:22.889649 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:36:22.889659 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:36:22.889719 | orchestrator |
2025-05-13 23:36:22.889729 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-05-13 23:36:22.889739 | orchestrator | Tuesday 13 May 2025 23:34:09 +0000 (0:00:02.056) 0:02:00.263 ***********
2025-05-13 23:36:22.889749 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:36:22.889758 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:36:22.889768 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:36:22.889777 | orchestrator |
2025-05-13 23:36:22.889787 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-05-13 23:36:22.889796 | orchestrator |
2025-05-13 23:36:22.889805 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-05-13 23:36:22.889815 | orchestrator | Tuesday 13 May 2025 23:34:20 +0000 (0:00:10.532) 0:02:10.796 ***********
2025-05-13 23:36:22.889824 | orchestrator | ok: [testbed-manager]
2025-05-13 23:36:22.889834 | orchestrator |
2025-05-13 23:36:22.889843 | orchestrator | TASK [Create .kube directory] **************************************************
2025-05-13 23:36:22.889853 | orchestrator | Tuesday 13 May 2025 23:34:20 +0000 (0:00:00.934) 0:02:11.730 ***********
2025-05-13 23:36:22.889862 | orchestrator | changed: [testbed-manager]
2025-05-13 23:36:22.889871 | orchestrator |
2025-05-13 23:36:22.889881 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-05-13 23:36:22.889890 | orchestrator | Tuesday 13 May 2025 23:34:21 +0000 (0:00:00.451) 0:02:12.182 ***********
2025-05-13 23:36:22.889897 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-05-13 23:36:22.889905 | orchestrator |
2025-05-13 23:36:22.889913 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-05-13 23:36:22.889927 | orchestrator | Tuesday 13 May 2025 23:34:22 +0000 (0:00:00.991) 0:02:13.173 ***********
2025-05-13 23:36:22.889935 | orchestrator | changed: [testbed-manager]
2025-05-13 23:36:22.889943 | orchestrator |
2025-05-13 23:36:22.889951 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-05-13 23:36:22.889958 | orchestrator | Tuesday 13 May 2025 23:34:23 +0000 (0:00:00.789) 0:02:13.962 ***********
2025-05-13 23:36:22.889966 | orchestrator | changed: [testbed-manager]
2025-05-13 23:36:22.889974 | orchestrator |
2025-05-13 23:36:22.889982 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-05-13 23:36:22.889995 | orchestrator | Tuesday 13 May 2025 23:34:23 +0000 (0:00:00.573) 0:02:14.536 ***********
2025-05-13 23:36:22.890003 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-13 23:36:22.890011 | orchestrator |
2025-05-13 23:36:22.890051 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-05-13 23:36:22.890059 | orchestrator | Tuesday 13 May 2025 23:34:25 +0000 (0:00:01.584) 0:02:16.120 ***********
2025-05-13 23:36:22.890067 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-13 23:36:22.890075 | orchestrator |
2025-05-13 23:36:22.890082 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-05-13 23:36:22.890090 | orchestrator | Tuesday 13 May 2025 23:34:26 +0000 (0:00:00.844) 0:02:16.965 ***********
2025-05-13 23:36:22.890098 | orchestrator | changed: [testbed-manager]
2025-05-13 23:36:22.890106 | orchestrator |
2025-05-13 23:36:22.890118 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-05-13 23:36:22.890126 | orchestrator | Tuesday 13 May 2025 23:34:26 +0000 (0:00:00.415) 0:02:17.380 ***********
2025-05-13 23:36:22.890133 | orchestrator | changed: [testbed-manager]
2025-05-13 23:36:22.890141 | orchestrator |
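k3s writes its kubeconfig with the server set to https://127.0.0.1:6443, so the two 'Change server address' tasks rewrite it to the cluster endpoint before the file is used elsewhere. A sketch of that rewrite; the target path is hypothetical since the real paths are not visible in this log, and 192.168.16.8 is the VIP already shown in the 'Configure kubectl cluster' task:

    - name: Change server address in the kubeconfig
      ansible.builtin.replace:
        path: /opt/configuration/kubeconfig        # hypothetical destination
        regexp: 'https://127\.0\.0\.1:6443'
        replace: 'https://192.168.16.8:6443'       # cluster VIP from the log above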
2025-05-13 23:36:22.890149 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-05-13 23:36:22.890157 | orchestrator |
2025-05-13 23:36:22.890164 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-05-13 23:36:22.890172 | orchestrator | Tuesday 13 May 2025 23:34:27 +0000 (0:00:00.452) 0:02:17.833 ***********
2025-05-13 23:36:22.890180 | orchestrator | ok: [testbed-manager]
2025-05-13 23:36:22.890187 | orchestrator |
2025-05-13 23:36:22.890195 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-05-13 23:36:22.890202 | orchestrator | Tuesday 13 May 2025 23:34:27 +0000 (0:00:00.146) 0:02:17.980 ***********
2025-05-13 23:36:22.890210 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-05-13 23:36:22.890218 | orchestrator |
2025-05-13 23:36:22.890226 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-05-13 23:36:22.890233 | orchestrator | Tuesday 13 May 2025 23:34:27 +0000 (0:00:00.401) 0:02:18.381 ***********
2025-05-13 23:36:22.890242 | orchestrator | ok: [testbed-manager]
2025-05-13 23:36:22.890249 | orchestrator |
2025-05-13 23:36:22.890257 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-05-13 23:36:22.890265 | orchestrator | Tuesday 13 May 2025 23:34:28 +0000 (0:00:00.855) 0:02:19.237 ***********
2025-05-13 23:36:22.890273 | orchestrator | ok: [testbed-manager]
2025-05-13 23:36:22.890281 | orchestrator |
2025-05-13 23:36:22.890288 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-05-13 23:36:22.890296 | orchestrator | Tuesday 13 May 2025 23:34:30 +0000 (0:00:01.657) 0:02:20.894 ***********
2025-05-13 23:36:22.890304 | orchestrator | changed: [testbed-manager]
2025-05-13 23:36:22.890312 | orchestrator |
2025-05-13 23:36:22.890320 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-05-13 23:36:22.890327 | orchestrator | Tuesday 13 May 2025 23:34:30 +0000 (0:00:00.797) 0:02:21.691 ***********
2025-05-13 23:36:22.890335 | orchestrator | ok: [testbed-manager]
2025-05-13 23:36:22.890343 | orchestrator |
2025-05-13 23:36:22.890351 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-05-13 23:36:22.890358 | orchestrator | Tuesday 13 May 2025 23:34:31 +0000 (0:00:00.481) 0:02:22.173 ***********
2025-05-13 23:36:22.890376 | orchestrator | changed: [testbed-manager]
2025-05-13 23:36:22.890384 | orchestrator |
2025-05-13 23:36:22.890391 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-05-13 23:36:22.890399 | orchestrator | Tuesday 13 May 2025 23:34:38 +0000 (0:00:06.927) 0:02:29.101 ***********
2025-05-13 23:36:22.890407 | orchestrator | changed: [testbed-manager]
2025-05-13 23:36:22.890415 | orchestrator |
2025-05-13 23:36:22.890422 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-05-13 23:36:22.890430 | orchestrator | Tuesday 13 May 2025 23:34:50 +0000 (0:00:12.385) 0:02:41.486 ***********
2025-05-13 23:36:22.890438 | orchestrator | ok: [testbed-manager]
2025-05-13 23:36:22.890446 | orchestrator |
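The kubectl role above runs the standard Debian-family flow: install apt-transport-https, fetch and trust the repository key, add the repository, then install the package. A rough equivalent, assuming the pkgs.k8s.io packaging and a placeholder v1.30 channel (the repository actually configured is not visible in this log):

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key   # assumed channel
        dest: /etc/apt/keyrings/kubernetes-apt-keyring.asc             # directory must exist
        mode: "0644"

    - name: Add repository Debian
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.asc] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /"

    - name: Install required packages
      ansible.builtin.apt:
        name: kubectl
        update_cache: true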
2025-05-13 23:36:22.890454 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-05-13 23:36:22.890462 | orchestrator |
2025-05-13 23:36:22.890469 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-05-13 23:36:22.890477 | orchestrator | Tuesday 13 May 2025 23:34:51 +0000 (0:00:00.516) 0:02:42.003 ***********
2025-05-13 23:36:22.890485 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.890492 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.890500 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.890508 | orchestrator |
2025-05-13 23:36:22.890516 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-05-13 23:36:22.890524 | orchestrator | Tuesday 13 May 2025 23:34:51 +0000 (0:00:00.484) 0:02:42.488 ***********
2025-05-13 23:36:22.890532 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.890539 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.890547 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.890555 | orchestrator |
2025-05-13 23:36:22.890563 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-05-13 23:36:22.890570 | orchestrator | Tuesday 13 May 2025 23:34:52 +0000 (0:00:00.330) 0:02:42.818 ***********
2025-05-13 23:36:22.890578 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:36:22.890586 | orchestrator |
2025-05-13 23:36:22.890594 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-05-13 23:36:22.890602 | orchestrator | Tuesday 13 May 2025 23:34:52 +0000 (0:00:00.510) 0:02:43.329 ***********
2025-05-13 23:36:22.890609 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-13 23:36:22.890617 | orchestrator |
2025-05-13 23:36:22.890625 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-05-13 23:36:22.890632 | orchestrator | Tuesday 13 May 2025 23:34:53 +0000 (0:00:00.892) 0:02:44.221 ***********
2025-05-13 23:36:22.890640 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-13 23:36:22.890648 | orchestrator |
2025-05-13 23:36:22.890656 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-05-13 23:36:22.890682 | orchestrator | Tuesday 13 May 2025 23:34:54 +0000 (0:00:00.946) 0:02:45.168 ***********
2025-05-13 23:36:22.890690 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.890698 | orchestrator |
2025-05-13 23:36:22.890706 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-05-13 23:36:22.890714 | orchestrator | Tuesday 13 May 2025 23:34:55 +0000 (0:00:00.755) 0:02:45.923 ***********
2025-05-13 23:36:22.890722 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-13 23:36:22.890730 | orchestrator |
2025-05-13 23:36:22.890737 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-05-13 23:36:22.890745 | orchestrator | Tuesday 13 May 2025 23:34:56 +0000 (0:00:01.122) 0:02:47.046 ***********
2025-05-13 23:36:22.890753 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.890761 | orchestrator |
2025-05-13 23:36:22.890768 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-05-13 23:36:22.890781 | orchestrator | Tuesday 13 May 2025 23:34:56 +0000 (0:00:00.203) 0:02:47.250 ***********
2025-05-13 23:36:22.890789 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.890802 | orchestrator |
2025-05-13 23:36:22.890810 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-05-13 23:36:22.890818 | orchestrator | Tuesday 13 May 2025 23:34:56 +0000 (0:00:00.233) 0:02:47.483 ***********
2025-05-13 23:36:22.890826 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.890833 | orchestrator |
2025-05-13 23:36:22.890841 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-05-13 23:36:22.890849 | orchestrator | Tuesday 13 May 2025 23:34:56 +0000 (0:00:00.197) 0:02:47.680 ***********
2025-05-13 23:36:22.890857 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.890864 | orchestrator |
2025-05-13 23:36:22.890872 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-05-13 23:36:22.890880 | orchestrator | Tuesday 13 May 2025 23:34:57 +0000 (0:00:00.207) 0:02:47.887 ***********
2025-05-13 23:36:22.890888 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-13 23:36:22.890896 | orchestrator |
2025-05-13 23:36:22.890903 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-05-13 23:36:22.890911 | orchestrator | Tuesday 13 May 2025 23:35:02 +0000 (0:00:05.540) 0:02:53.428 ***********
2025-05-13 23:36:22.890919 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2025-05-13 23:36:22.890926 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2025-05-13 23:36:22.890934 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-05-13 23:36:22.890942 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-05-13 23:36:22.890950 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-05-13 23:36:22.890957 | orchestrator |
2025-05-13 23:36:22.890965 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-05-13 23:36:22.890972 | orchestrator | Tuesday 13 May 2025 23:35:51 +0000 (0:00:49.203) 0:03:42.632 ***********
2025-05-13 23:36:22.890980 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-13 23:36:22.890988 | orchestrator |
2025-05-13 23:36:22.890996 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-05-13 23:36:22.891004 | orchestrator | Tuesday 13 May 2025 23:35:53 +0000 (0:00:01.487) 0:03:44.120 ***********
2025-05-13 23:36:22.891011 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-13 23:36:22.891019 | orchestrator |
2025-05-13 23:36:22.891027 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-05-13 23:36:22.891034 | orchestrator | Tuesday 13 May 2025 23:35:55 +0000 (0:00:01.648) 0:03:45.769 ***********
2025-05-13 23:36:22.891042 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-13 23:36:22.891050 | orchestrator |
2025-05-13 23:36:22.891058 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-05-13 23:36:22.891065 | orchestrator | Tuesday 13 May 2025 23:35:56 +0000 (0:00:01.090) 0:03:46.860 ***********
2025-05-13 23:36:22.891073 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.891081 | orchestrator |
2025-05-13 23:36:22.891088 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-05-13 23:36:22.891096 | orchestrator | Tuesday 13 May 2025 23:35:56 +0000 (0:00:00.200) 0:03:47.060 ***********
2025-05-13 23:36:22.891104 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-05-13 23:36:22.891112 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-05-13 23:36:22.891120 | orchestrator |
2025-05-13 23:36:22.891127 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-05-13 23:36:22.891135 | orchestrator | Tuesday 13 May 2025 23:35:59 +0000 (0:00:02.843) 0:03:49.904 ***********
2025-05-13 23:36:22.891142 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.891150 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.891158 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.891166 | orchestrator |
2025-05-13 23:36:22.891173 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-05-13 23:36:22.891186 | orchestrator | Tuesday 13 May 2025 23:35:59 +0000 (0:00:00.397) 0:03:50.302 ***********
2025-05-13 23:36:22.891194 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.891202 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.891209 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.891217 | orchestrator |
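The 49-second 'Wait for Cilium resources' step above polls each workload until its rollout completes; the single FAILED - RETRYING line shows one retry while the cilium DaemonSet came up. A sketch of that style of wait, assuming kubectl and an admin kubeconfig on the delegate host rather than the exact task used here:

    - name: Wait for Cilium resources
      ansible.builtin.command:
        cmd: "kubectl -n kube-system rollout status --timeout=60s {{ item }}"
      loop:
        - deployment/cilium-operator
        - daemonset/cilium
        - deployment/hubble-relay
        - deployment/hubble-ui
      register: cilium_rollout
      until: cilium_rollout.rc == 0
      retries: 30
      delay: 10
      delegate_to: localhost
      changed_when: false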
2025-05-13 23:36:22.891225 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-05-13 23:36:22.891233 | orchestrator |
2025-05-13 23:36:22.891240 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-05-13 23:36:22.891248 | orchestrator | Tuesday 13 May 2025 23:36:00 +0000 (0:00:00.900) 0:03:51.202 ***********
2025-05-13 23:36:22.891256 | orchestrator | ok: [testbed-manager]
2025-05-13 23:36:22.891263 | orchestrator |
2025-05-13 23:36:22.891271 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-05-13 23:36:22.891279 | orchestrator | Tuesday 13 May 2025 23:36:00 +0000 (0:00:00.172) 0:03:51.374 ***********
2025-05-13 23:36:22.891291 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-05-13 23:36:22.891299 | orchestrator |
2025-05-13 23:36:22.891307 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-05-13 23:36:22.891314 | orchestrator | Tuesday 13 May 2025 23:36:01 +0000 (0:00:00.539) 0:03:51.914 ***********
2025-05-13 23:36:22.891322 | orchestrator | changed: [testbed-manager]
2025-05-13 23:36:22.891330 | orchestrator |
2025-05-13 23:36:22.891337 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-05-13 23:36:22.891345 | orchestrator |
2025-05-13 23:36:22.891353 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-05-13 23:36:22.891360 | orchestrator | Tuesday 13 May 2025 23:36:07 +0000 (0:00:06.508) 0:03:58.423 ***********
2025-05-13 23:36:22.891368 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:36:22.891375 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:36:22.891387 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:36:22.891395 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:36:22.891403 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:36:22.891410 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:36:22.891418 | orchestrator |
2025-05-13 23:36:22.891426 | orchestrator | TASK [Manage labels] ***********************************************************
2025-05-13 23:36:22.891434 | orchestrator | Tuesday 13 May 2025 23:36:08 +0000 (0:00:00.603) 0:03:59.027 ***********
2025-05-13 23:36:22.891442 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-05-13 23:36:22.891449 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-05-13 23:36:22.891457 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-05-13 23:36:22.891465 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-05-13 23:36:22.891472 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-05-13 23:36:22.891480 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-05-13 23:36:22.891488 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-05-13 23:36:22.891495 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-05-13 23:36:22.891503 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-05-13 23:36:22.891510 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-05-13 23:36:22.891518 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-05-13 23:36:22.891526 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-05-13 23:36:22.891533 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-05-13 23:36:22.891546 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-05-13 23:36:22.891554 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-05-13 23:36:22.891562 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-05-13 23:36:22.891569 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-05-13 23:36:22.891577 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-05-13 23:36:22.891585 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-05-13 23:36:22.891592 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-05-13 23:36:22.891600 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-05-13 23:36:22.891607 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-05-13 23:36:22.891615 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-05-13 23:36:22.891623 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-05-13 23:36:22.891630 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-05-13 23:36:22.891638 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-05-13 23:36:22.891646 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-05-13 23:36:22.891653 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-05-13 23:36:22.891674 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-05-13 23:36:22.891683 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
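Each item in the 'Manage labels' task above corresponds to one kubectl label operation against the node object, which is why the control-plane, worker, and rook-* roles appear per node. A sketch of how such a task is commonly written, with k3s_node_labels as a hypothetical per-host variable:

    - name: Manage labels
      ansible.builtin.command:
        cmd: "kubectl label node {{ inventory_hostname }} {{ item }} --overwrite"
      # e.g. ['node-role.osism.tech/control-plane=true', 'openstack-control-plane=enabled']
      loop: "{{ k3s_node_labels | default([]) }}"
      delegate_to: localhost
      changed_when: false   # --overwrite makes the call idempotent; report ok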
2025-05-13 23:36:22.891690 | orchestrator |
2025-05-13 23:36:22.891698 | orchestrator | TASK [Manage annotations] ******************************************************
2025-05-13 23:36:22.891706 | orchestrator | Tuesday 13 May 2025 23:36:21 +0000 (0:00:13.053) 0:04:12.080 ***********
2025-05-13 23:36:22.891713 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:36:22.891721 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:36:22.891729 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:36:22.891739 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.891752 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.891767 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.891789 | orchestrator |
2025-05-13 23:36:22.891803 | orchestrator | TASK [Manage taints] ***********************************************************
2025-05-13 23:36:22.891824 | orchestrator | Tuesday 13 May 2025 23:36:21 +0000 (0:00:00.433) 0:04:12.514 ***********
2025-05-13 23:36:22.891837 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:36:22.891850 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:36:22.891862 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:36:22.891874 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:36:22.891886 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:36:22.891898 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:36:22.891910 | orchestrator |
2025-05-13 23:36:22.891923 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:36:22.891936 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:36:22.891958 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-13 23:36:22.891972 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-05-13 23:36:22.891986 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-05-13 23:36:22.892009 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-13 23:36:22.892022 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-13 23:36:22.892036 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-13 23:36:22.892048 | orchestrator |
2025-05-13 23:36:22.892062 | orchestrator |
2025-05-13 23:36:22.892076 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:36:22.892089 | orchestrator | Tuesday 13 May 2025 23:36:22 +0000 (0:00:00.526) 0:04:13.040 ***********
2025-05-13 23:36:22.892102 | orchestrator | ===============================================================================
2025-05-13 23:36:22.892114 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 49.20s
2025-05-13 23:36:22.892122 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 45.82s
2025-05-13 23:36:22.892129 | orchestrator | Manage labels ---------------------------------------------------------- 13.05s
2025-05-13 23:36:22.892137 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 13.02s
2025-05-13 23:36:22.892144 | orchestrator | kubectl : Install required packages ------------------------------------ 12.39s
2025-05-13 23:36:22.892152 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.53s
2025-05-13 23:36:22.892159 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.93s
2025-05-13 23:36:22.892167 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.76s
2025-05-13 23:36:22.892174 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.51s
2025-05-13 23:36:22.892182 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.54s
2025-05-13 23:36:22.892190 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.46s
2025-05-13 23:36:22.892198 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.84s
2025-05-13 23:36:22.892205 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.53s
2025-05-13 23:36:22.892213 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.49s
2025-05-13 23:36:22.892221 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.38s
2025-05-13 23:36:22.892228 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.23s
2025-05-13 23:36:22.892235 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.20s
2025-05-13 23:36:22.892243 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 2.06s
2025-05-13 23:36:22.892250 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.66s
2025-05-13 23:36:22.892258 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.65s
2025-05-13 23:36:22.892266 | orchestrator | 2025-05-13 23:36:22 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:36:22.892278 | orchestrator | 2025-05-13 23:36:22 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:36:22.892291 | orchestrator | 2025-05-13 23:36:22 | INFO  | Task bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED
2025-05-13 23:36:22.892305 | orchestrator | 2025-05-13 23:36:22 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED
2025-05-13 23:36:22.892317 | orchestrator | 2025-05-13 23:36:22 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:36:25.918450 | orchestrator | 2025-05-13 23:36:25 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:36:25.919892 | orchestrator | 2025-05-13 23:36:25 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:36:25.920466 | orchestrator | 2025-05-13 23:36:25 | INFO  | Task bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED
2025-05-13 23:36:25.921261 | orchestrator | 2025-05-13 23:36:25 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED
2025-05-13 23:36:25.922649 | orchestrator | 2025-05-13 23:36:25 | INFO  | Task 1cf57c11-6133-4542-87be-2c996afb3207 is in state STARTED
2025-05-13 23:36:25.923330 | orchestrator | 2025-05-13 23:36:25 | INFO  | Task 0000342a-b616-4f73-b62f-54a530ff376a is in state STARTED
2025-05-13 23:36:25.923616 | orchestrator | 2025-05-13 23:36:25 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:36:28.977059 | orchestrator | 2025-05-13 23:36:28 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:36:28.978921 | orchestrator | 2025-05-13 23:36:28 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:36:28.979721 | orchestrator | 2025-05-13 23:36:28 | INFO  | Task bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED
2025-05-13 23:36:28.980907 | orchestrator | 2025-05-13 23:36:28 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED
2025-05-13 23:36:28.983869 | orchestrator | 2025-05-13 23:36:28 | INFO  | Task 1cf57c11-6133-4542-87be-2c996afb3207 is in state STARTED
2025-05-13 23:36:28.985348 | orchestrator | 2025-05-13 23:36:28 | INFO  | Task 0000342a-b616-4f73-b62f-54a530ff376a is in state STARTED
2025-05-13 23:36:28.985387 | orchestrator | 2025-05-13 23:36:28 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:36:32.026899 | orchestrator | 2025-05-13 23:36:32 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:36:32.026972 | orchestrator | 2025-05-13 23:36:32 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:36:32.028156 | orchestrator | 2025-05-13 23:36:32 | INFO  | Task bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED
2025-05-13 23:36:32.029915 | orchestrator | 2025-05-13 23:36:32 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED
2025-05-13 23:36:32.030950 | orchestrator | 2025-05-13 23:36:32 | INFO  | Task 1cf57c11-6133-4542-87be-2c996afb3207 is in state STARTED
2025-05-13 23:36:32.031475 | orchestrator | 2025-05-13 23:36:32 | INFO  | Task 0000342a-b616-4f73-b62f-54a530ff376a is in state SUCCESS
2025-05-13 23:36:32.031604 | orchestrator | 2025-05-13 23:36:32 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:36:35.070306 | orchestrator | 2025-05-13 23:36:35 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:36:35.070538 | orchestrator | 2025-05-13 23:36:35 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:36:35.071238 | orchestrator | 2025-05-13 23:36:35 | INFO  | Task bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED
2025-05-13 23:36:35.071944 | orchestrator | 2025-05-13 23:36:35 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED
2025-05-13 23:36:35.072538 | orchestrator | 2025-05-13 23:36:35 | INFO  | Task 1cf57c11-6133-4542-87be-2c996afb3207 is in state SUCCESS
2025-05-13 23:36:35.072571 | orchestrator | 2025-05-13 23:36:35 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:36:38.110824 | orchestrator | 2025-05-13 23:36:38 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:36:38.114161 | orchestrator | 2025-05-13 23:36:38 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:36:38.115193 | orchestrator | 2025-05-13 23:36:38 | INFO  | Task bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED
2025-05-13 23:36:38.116419 | orchestrator | 2025-05-13 23:36:38 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED
2025-05-13 23:36:38.116449 | orchestrator | 2025-05-13 23:36:38 | INFO  | Wait 1 second(s) until the next check
23:36:41.169849 | orchestrator | 2025-05-13 23:36:41 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state STARTED
2025-05-13 23:36:41.169892 | orchestrator | 2025-05-13 23:36:41 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output elided: tasks f311897f-0ee7-4695-88cb-19ce7dbe65ab, e6838759-dc51-4445-8ca4-ecc8c7941f72, bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 and 2ba0f24e-ffde-40ca-8de4-8585eab5387a remain in state STARTED; the check repeats roughly every 3 seconds from 23:36:44 through 23:37:42 ...]
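The "Task ... is in state STARTED" lines are the deployment tooling polling Celery-style task state until each task reaches a terminal state. A minimal sketch of such a wait loop, assuming hypothetical broker/backend URLs (this is illustrative, not the actual OSISM implementation); the ~3-second cadence in the log, despite "Wait 1 second(s)", is likely the 1-second sleep plus per-task query latency:

```python
import time

from celery import Celery
from celery.result import AsyncResult

# Hypothetical broker/backend URLs; the real manager service has its own config.
app = Celery(broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

TASK_IDS = [
    "f311897f-0ee7-4695-88cb-19ce7dbe65ab",
    "e6838759-dc51-4445-8ca4-ecc8c7941f72",
    "bd4c2ed2-1b3d-48bb-863b-655b659cf7e5",
    "2ba0f24e-ffde-40ca-8de4-8585eab5387a",
]


def wait_for_tasks(task_ids, interval=1):
    """Poll every task until it reaches a terminal state, as in the log above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = AsyncResult(task_id, app=app).state
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)


wait_for_tasks(TASK_IDS)
```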
2025-05-13 23:37:45.328891 | orchestrator | 2025-05-13 23:37:45 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:37:45.330850 | orchestrator | 2025-05-13 23:37:45 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:37:45.332716 | orchestrator | 2025-05-13 23:37:45 | INFO  | Task bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED
2025-05-13 23:37:45.334679 | orchestrator | 2025-05-13 23:37:45 | INFO  | Task 2ba0f24e-ffde-40ca-8de4-8585eab5387a is in state SUCCESS
2025-05-13 23:37:45.336056 | orchestrator |
2025-05-13 23:37:45.336074 | orchestrator |
2025-05-13 23:37:45.336079 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-05-13 23:37:45.336099 | orchestrator |
2025-05-13 23:37:45.336104 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-05-13 23:37:45.336108 | orchestrator | Tuesday 13 May 2025 23:36:26 +0000 (0:00:00.212) 0:00:00.212 ***********
2025-05-13 23:37:45.336113 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-05-13 23:37:45.336117 | orchestrator |
2025-05-13 23:37:45.336121 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-05-13 23:37:45.336125 | orchestrator | Tuesday 13 May 2025 23:36:27 +0000 (0:00:00.827) 0:00:01.040 ***********
2025-05-13 23:37:45.336129 | orchestrator | changed: [testbed-manager]
2025-05-13 23:37:45.336134 | orchestrator |
2025-05-13 23:37:45.336138 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-05-13 23:37:45.336142 | orchestrator | Tuesday 13 May 2025 23:36:28 +0000 (0:00:01.246) 0:00:02.286 ***********
2025-05-13 23:37:45.336146 | orchestrator | changed: [testbed-manager]
2025-05-13 23:37:45.336150 | orchestrator |
2025-05-13 23:37:45.336154 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:37:45.336158 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:37:45.336164 | orchestrator |
2025-05-13 23:37:45.336168 | orchestrator |
2025-05-13 23:37:45.336172 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:37:45.336176 | orchestrator | Tuesday 13 May 2025 23:36:29 +0000 (0:00:00.358) 0:00:02.644 ***********
2025-05-13 23:37:45.336180 | orchestrator | ===============================================================================
2025-05-13 23:37:45.336184 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.25s
2025-05-13 23:37:45.336188 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.83s
2025-05-13 23:37:45.336192 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.36s
2025-05-13 23:37:45.336196 | orchestrator |
2025-05-13 23:37:45.336200 | orchestrator |
2025-05-13 23:37:45.336204 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-05-13 23:37:45.336207 | orchestrator |
2025-05-13 23:37:45.336211 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-05-13 23:37:45.336215 | orchestrator | Tuesday 13 May 2025 23:36:26 +0000 (0:00:00.143) 0:00:00.143 ***********
2025-05-13 23:37:45.336219 | orchestrator | ok: [testbed-manager]
2025-05-13 23:37:45.336224 | orchestrator |
2025-05-13 23:37:45.336228 | orchestrator | TASK [Create .kube directory] **************************************************
2025-05-13 23:37:45.336232 | orchestrator | Tuesday 13 May 2025 23:36:26 +0000 (0:00:00.495) 0:00:00.639 ***********
2025-05-13 23:37:45.336236 | orchestrator | ok: [testbed-manager]
2025-05-13 23:37:45.336240 | orchestrator |
2025-05-13 23:37:45.336244 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-05-13 23:37:45.336247 | orchestrator | Tuesday 13 May 2025 23:36:27 +0000 (0:00:00.527) 0:00:01.166 ***********
2025-05-13 23:37:45.336251 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-05-13 23:37:45.336255 | orchestrator |
2025-05-13 23:37:45.336259 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-05-13 23:37:45.336263 | orchestrator | Tuesday 13 May 2025 23:36:27 +0000 (0:00:00.618) 0:00:01.784 ***********
2025-05-13 23:37:45.336267 | orchestrator | changed: [testbed-manager]
2025-05-13 23:37:45.336271 | orchestrator |
2025-05-13 23:37:45.336275 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-05-13 23:37:45.336279 | orchestrator | Tuesday 13 May 2025 23:36:28 +0000 (0:00:01.117) 0:00:02.901 ***********
2025-05-13 23:37:45.336283 | orchestrator | changed: [testbed-manager]
2025-05-13 23:37:45.336287 | orchestrator |
2025-05-13 23:37:45.336291 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-05-13 23:37:45.336295 | orchestrator | Tuesday 13 May 2025 23:36:29 +0000 (0:00:00.662) 0:00:03.564 ***********
2025-05-13 23:37:45.336302 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-13 23:37:45.336306 | orchestrator |
2025-05-13 23:37:45.336310 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-05-13 23:37:45.336314 | orchestrator | Tuesday 13 May 2025 23:36:31 +0000 (0:00:01.548) 0:00:05.113 ***********
2025-05-13 23:37:45.336318 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-13 23:37:45.336322 | orchestrator |
2025-05-13 23:37:45.336325 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-05-13 23:37:45.336329 | orchestrator | Tuesday 13 May 2025 23:36:31 +0000 (0:00:00.759) 0:00:05.873 ***********
2025-05-13 23:37:45.336333 | orchestrator | ok: [testbed-manager]
2025-05-13 23:37:45.336337 | orchestrator |
2025-05-13 23:37:45.336341 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-05-13 23:37:45.336345 | orchestrator | Tuesday 13 May 2025 23:36:32 +0000 (0:00:00.337) 0:00:06.210 ***********
2025-05-13 23:37:45.336348 | orchestrator | ok: [testbed-manager]
2025-05-13 23:37:45.336352 | orchestrator |
2025-05-13 23:37:45.336356 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:37:45.336360 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:37:45.336364 | orchestrator |
2025-05-13 23:37:45.336368 | orchestrator |
2025-05-13 23:37:45.336372 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:37:45.336375 | orchestrator | Tuesday 13 May 2025 23:36:32 +0000 (0:00:00.275) 0:00:06.486 ***********
2025-05-13 23:37:45.336379 | orchestrator | ===============================================================================
2025-05-13 23:37:45.336383 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.55s
2025-05-13 23:37:45.336393 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.12s
2025-05-13 23:37:45.336397 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.76s
2025-05-13 23:37:45.336407 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.66s
2025-05-13 23:37:45.336411 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.62s
2025-05-13 23:37:45.336415 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s
2025-05-13 23:37:45.336419 | orchestrator | Get home directory of operator user ------------------------------------- 0.50s
2025-05-13 23:37:45.336423 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.34s
2025-05-13 23:37:45.336427 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s
2025-05-13 23:37:45.336431 | orchestrator |
2025-05-13 23:37:45.336434 | orchestrator |
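The two "Change server address" tasks rewrite the API endpoint in the fetched kubeconfig so it is reachable from where kubectl actually runs. A minimal sketch of that kind of rewrite using PyYAML; the path and target address here are hypothetical, and the real plays may use a different mechanism (for example ansible.builtin.replace):

```python
import yaml  # PyYAML

KUBECONFIG = "/home/dragon/.kube/config"        # hypothetical path
NEW_SERVER = "https://192.168.16.10:6443"       # hypothetical reachable endpoint

with open(KUBECONFIG) as fh:
    config = yaml.safe_load(fh)

# Point every cluster entry at the reachable API server address.
for cluster in config.get("clusters", []):
    cluster["cluster"]["server"] = NEW_SERVER

with open(KUBECONFIG, "w") as fh:
    yaml.safe_dump(config, fh, default_flow_style=False)
```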
2025-05-13 23:37:45.336438 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-13 23:37:45.336442 | orchestrator |
2025-05-13 23:37:45.336446 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-13 23:37:45.336450 | orchestrator | Tuesday 13 May 2025 23:35:21 +0000 (0:00:00.120) 0:00:00.120 ***********
2025-05-13 23:37:45.336454 | orchestrator | ok: [localhost] => {
2025-05-13 23:37:45.336458 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-05-13 23:37:45.336462 | orchestrator | }
2025-05-13 23:37:45.336467 | orchestrator |
2025-05-13 23:37:45.336471 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-13 23:37:45.336475 | orchestrator | Tuesday 13 May 2025 23:35:21 +0000 (0:00:00.123) 0:00:00.244 ***********
2025-05-13 23:37:45.336480 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-13 23:37:45.336485 | orchestrator | ...ignoring
2025-05-13 23:37:45.336489 | orchestrator |
2025-05-13 23:37:45.336493 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-13 23:37:45.336496 | orchestrator | Tuesday 13 May 2025 23:35:24 +0000 (0:00:03.200) 0:00:03.445 ***********
2025-05-13 23:37:45.336503 | orchestrator | skipping: [localhost]
2025-05-13 23:37:45.336507 | orchestrator |
2025-05-13 23:37:45.336511 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-13 23:37:45.336515 | orchestrator | Tuesday 13 May 2025 23:35:24 +0000 (0:00:00.123) 0:00:03.568 ***********
2025-05-13 23:37:45.336519 | orchestrator | ok: [localhost]
2025-05-13 23:37:45.336523 | orchestrator |
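This play probes the RabbitMQ management endpoint to decide whether the run is a fresh deploy or an upgrade of an already running cluster; on a not-yet-deployed testbed the timeout above is expected and deliberately ignored. A rough Python equivalent of that probe, using the address and search string shown in the log (the function and variable names are illustrative only):

```python
import socket


def rabbitmq_management_reachable(host="192.168.16.9", port=15672, timeout=2):
    """Return True if the management UI answers with its banner, mirroring
    the 'search string RabbitMQ Management' check from the log."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            request = "GET / HTTP/1.0\r\nHost: %s\r\n\r\n" % host
            sock.sendall(request.encode())
            data = b""
            while len(data) < 65536:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                data += chunk
            return b"RabbitMQ Management" in data
    except OSError:
        return False


# Fresh deploy if unreachable, upgrade of a running cluster otherwise.
kolla_action_rabbitmq = "upgrade" if rabbitmq_management_reachable() else "deploy"
```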
2025-05-13 23:37:45.336527 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:37:45.336531 | orchestrator |
2025-05-13 23:37:45.336535 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:37:45.336539 | orchestrator | Tuesday 13 May 2025 23:35:24 +0000 (0:00:00.463) 0:00:04.032 ***********
2025-05-13 23:37:45.336543 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:37:45.336547 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:37:45.336551 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:37:45.336555 | orchestrator |
2025-05-13 23:37:45.336559 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 23:37:45.336562 | orchestrator | Tuesday 13 May 2025 23:35:26 +0000 (0:00:01.144) 0:00:05.177 ***********
2025-05-13 23:37:45.336566 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-13 23:37:45.336571 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-13 23:37:45.336574 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-13 23:37:45.336578 | orchestrator |
2025-05-13 23:37:45.336582 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-13 23:37:45.336586 | orchestrator |
2025-05-13 23:37:45.336590 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-13 23:37:45.336594 | orchestrator | Tuesday 13 May 2025 23:35:27 +0000 (0:00:01.117) 0:00:06.295 ***********
2025-05-13 23:37:45.336598 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:37:45.336602 | orchestrator |
2025-05-13 23:37:45.336606 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-13 23:37:45.336610 | orchestrator | Tuesday 13 May 2025 23:35:28 +0000 (0:00:00.916) 0:00:07.211 ***********
2025-05-13 23:37:45.336614 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:37:45.336618 | orchestrator |
2025-05-13 23:37:45.336622 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-13 23:37:45.336626 | orchestrator | Tuesday 13 May 2025 23:35:29 +0000 (0:00:01.134) 0:00:08.346 ***********
2025-05-13 23:37:45.336629 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:37:45.336633 | orchestrator |
2025-05-13 23:37:45.336637 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-13 23:37:45.336641 | orchestrator | Tuesday 13 May 2025 23:35:29 +0000 (0:00:00.481) 0:00:08.828 ***********
2025-05-13 23:37:45.336645 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:37:45.336649 | orchestrator |
2025-05-13 23:37:45.336653 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-13 23:37:45.336657 | orchestrator | Tuesday 13 May 2025 23:35:30 +0000 (0:00:00.648) 0:00:09.476 ***********
2025-05-13 23:37:45.336661 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:37:45.336664 | orchestrator |
2025-05-13 23:37:45.336668 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-13 23:37:45.336672 | orchestrator | Tuesday 13 May 2025 23:35:30 +0000 (0:00:00.511) 0:00:09.988 ***********
2025-05-13 23:37:45.336676 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:37:45.336680 | orchestrator |
2025-05-13 23:37:45.336684 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-13 23:37:45.336723 | orchestrator | Tuesday 13 May 2025 23:35:31 +0000 (0:00:00.828) 0:00:10.816 ***********
2025-05-13 23:37:45.336731 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:37:45.336735 | orchestrator |
2025-05-13 23:37:45.336740 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-13 23:37:45.336750 | orchestrator | Tuesday 13 May 2025 23:35:32 +0000 (0:00:00.666) 0:00:11.483 ***********
2025-05-13 23:37:45.336754 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:37:45.336758 | orchestrator |
2025-05-13 23:37:45.336762 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-13 23:37:45.336767 | orchestrator | Tuesday 13 May 2025 23:35:33 +0000 (0:00:00.893) 0:00:12.376 ***********
2025-05-13 23:37:45.336771 |
orchestrator | skipping: [testbed-node-0] 2025-05-13 23:37:45.336775 | orchestrator | 2025-05-13 23:37:45.336779 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-13 23:37:45.336783 | orchestrator | Tuesday 13 May 2025 23:35:33 +0000 (0:00:00.365) 0:00:12.742 *********** 2025-05-13 23:37:45.336787 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:37:45.336791 | orchestrator | 2025-05-13 23:37:45.336795 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-13 23:37:45.336800 | orchestrator | Tuesday 13 May 2025 23:35:34 +0000 (0:00:00.391) 0:00:13.133 *********** 2025-05-13 23:37:45.336808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:37:45.336815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:37:45.336821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:37:45.336828 | orchestrator | 2025-05-13 23:37:45.336832 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-13 23:37:45.336843 | orchestrator | Tuesday 13 May 2025 23:35:35 +0000 (0:00:01.097) 0:00:14.231 *********** 2025-05-13 23:37:45.336851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:37:45.336856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:37:45.336861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:37:45.336865 | orchestrator | 2025-05-13 23:37:45.336871 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-13 23:37:45.336877 | orchestrator | Tuesday 13 May 2025 23:35:37 +0000 (0:00:02.011) 0:00:16.242 *********** 2025-05-13 23:37:45.336884 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-13 23:37:45.336894 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-13 23:37:45.336901 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-13 23:37:45.336907 | orchestrator | 2025-05-13 23:37:45.336913 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-13 23:37:45.336919 | orchestrator | Tuesday 13 May 2025 23:35:40 +0000 (0:00:03.295) 0:00:19.537 *********** 2025-05-13 23:37:45.336925 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-13 23:37:45.336931 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-13 23:37:45.336939 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-13 23:37:45.336945 | orchestrator | 2025-05-13 23:37:45.336950 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-13 23:37:45.336959 | orchestrator | Tuesday 13 May 2025 23:35:42 +0000 (0:00:02.151) 0:00:21.688 *********** 2025-05-13 23:37:45.336965 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-13 23:37:45.336971 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-13 23:37:45.336976 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-13 23:37:45.336982 | orchestrator | 2025-05-13 23:37:45.336988 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-13 23:37:45.336994 | orchestrator | Tuesday 13 May 2025 23:35:44 +0000 (0:00:01.786) 0:00:23.475 *********** 2025-05-13 23:37:45.337000 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-13 23:37:45.337006 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-13 23:37:45.337012 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-13 23:37:45.337018 | orchestrator | 2025-05-13 23:37:45.337024 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-13 23:37:45.337030 | orchestrator | Tuesday 13 May 2025 23:35:46 +0000 (0:00:02.133) 0:00:25.608 *********** 2025-05-13 23:37:45.337036 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 
2025-05-13 23:37:45.337043 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-13 23:37:45.337049 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-13 23:37:45.337055 | orchestrator | 2025-05-13 23:37:45.337061 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-13 23:37:45.337068 | orchestrator | Tuesday 13 May 2025 23:35:48 +0000 (0:00:01.817) 0:00:27.426 *********** 2025-05-13 23:37:45.337074 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-13 23:37:45.337080 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-13 23:37:45.337086 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-13 23:37:45.337092 | orchestrator | 2025-05-13 23:37:45.337098 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-13 23:37:45.337102 | orchestrator | Tuesday 13 May 2025 23:35:50 +0000 (0:00:02.263) 0:00:29.690 *********** 2025-05-13 23:37:45.337106 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:37:45.337109 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:37:45.337113 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:37:45.337117 | orchestrator | 2025-05-13 23:37:45.337120 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-13 23:37:45.337128 | orchestrator | Tuesday 13 May 2025 23:35:51 +0000 (0:00:00.885) 0:00:30.575 *********** 2025-05-13 23:37:45.337133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:37:45.337144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:37:45.337148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:37:45.337153 | orchestrator | 2025-05-13 23:37:45.337156 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-13 23:37:45.337160 | orchestrator | Tuesday 13 May 2025 23:35:53 +0000 (0:00:01.908) 0:00:32.484 *********** 2025-05-13 23:37:45.337164 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:37:45.337167 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:37:45.337171 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:37:45.337175 | orchestrator | 2025-05-13 23:37:45.337179 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-13 23:37:45.337182 | orchestrator | Tuesday 13 May 2025 23:35:54 +0000 (0:00:01.031) 0:00:33.515 *********** 2025-05-13 23:37:45.337186 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:37:45.337192 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:37:45.337196 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:37:45.337200 | orchestrator | 2025-05-13 23:37:45.337203 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-13 23:37:45.337207 | orchestrator | Tuesday 13 May 2025 23:36:02 +0000 (0:00:08.329) 0:00:41.844 *********** 2025-05-13 23:37:45.337211 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:37:45.337214 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:37:45.337218 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:37:45.337221 | orchestrator | 2025-05-13 23:37:45.337225 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-13 23:37:45.337229 | orchestrator | 2025-05-13 23:37:45.337232 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-13 23:37:45.337236 | orchestrator | Tuesday 13 May 2025 23:36:03 +0000 (0:00:00.417) 0:00:42.262 *********** 2025-05-13 23:37:45.337240 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:37:45.337243 | orchestrator | 2025-05-13 23:37:45.337247 | orchestrator | TASK 
[rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-13 23:37:45.337250 | orchestrator | Tuesday 13 May 2025 23:36:04 +0000 (0:00:00.905) 0:00:43.167 *********** 2025-05-13 23:37:45.337254 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:37:45.337258 | orchestrator | 2025-05-13 23:37:45.337261 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-13 23:37:45.337265 | orchestrator | Tuesday 13 May 2025 23:36:05 +0000 (0:00:01.128) 0:00:44.296 *********** 2025-05-13 23:37:45.337269 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:37:45.337273 | orchestrator | 2025-05-13 23:37:45.337276 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-13 23:37:45.337280 | orchestrator | Tuesday 13 May 2025 23:36:13 +0000 (0:00:07.890) 0:00:52.186 *********** 2025-05-13 23:37:45.337284 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:37:45.337287 | orchestrator | 2025-05-13 23:37:45.337291 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-13 23:37:45.337294 | orchestrator | 2025-05-13 23:37:45.337298 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-13 23:37:45.337302 | orchestrator | Tuesday 13 May 2025 23:37:04 +0000 (0:00:51.179) 0:01:43.365 *********** 2025-05-13 23:37:45.337305 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:37:45.337309 | orchestrator | 2025-05-13 23:37:45.337312 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-13 23:37:45.337316 | orchestrator | Tuesday 13 May 2025 23:37:04 +0000 (0:00:00.690) 0:01:44.056 *********** 2025-05-13 23:37:45.337320 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:37:45.337323 | orchestrator | 2025-05-13 23:37:45.337327 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-13 23:37:45.337330 | orchestrator | Tuesday 13 May 2025 23:37:05 +0000 (0:00:00.521) 0:01:44.577 *********** 2025-05-13 23:37:45.337334 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:37:45.337338 | orchestrator | 2025-05-13 23:37:45.337341 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-13 23:37:45.337345 | orchestrator | Tuesday 13 May 2025 23:37:07 +0000 (0:00:01.893) 0:01:46.471 *********** 2025-05-13 23:37:45.337349 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:37:45.337352 | orchestrator | 2025-05-13 23:37:45.337356 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-13 23:37:45.337359 | orchestrator | 2025-05-13 23:37:45.337365 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-13 23:37:45.337369 | orchestrator | Tuesday 13 May 2025 23:37:22 +0000 (0:00:14.708) 0:02:01.179 *********** 2025-05-13 23:37:45.337373 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:37:45.337376 | orchestrator | 2025-05-13 23:37:45.337382 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-13 23:37:45.337386 | orchestrator | Tuesday 13 May 2025 23:37:22 +0000 (0:00:00.607) 0:02:01.787 *********** 2025-05-13 23:37:45.337393 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:37:45.337397 | orchestrator | 2025-05-13 23:37:45.337400 | orchestrator | TASK [rabbitmq : Restart 
rabbitmq container] ***********************************
2025-05-13 23:37:45.337404 | orchestrator | Tuesday 13 May 2025 23:37:22 +0000 (0:00:00.242) 0:02:02.030 ***********
2025-05-13 23:37:45.337408 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:37:45.337411 | orchestrator |
2025-05-13 23:37:45.337415 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-13 23:37:45.337418 | orchestrator | Tuesday 13 May 2025 23:37:29 +0000 (0:00:06.631) 0:02:08.661 ***********
2025-05-13 23:37:45.337422 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:37:45.337426 | orchestrator |
2025-05-13 23:37:45.337429 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-05-13 23:37:45.337433 | orchestrator |
2025-05-13 23:37:45.337437 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-13 23:37:45.337440 | orchestrator | Tuesday 13 May 2025 23:37:39 +0000 (0:00:10.387) 0:02:19.049 ***********
2025-05-13 23:37:45.337444 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:37:45.337448 | orchestrator |
2025-05-13 23:37:45.337451 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-13 23:37:45.337455 | orchestrator | Tuesday 13 May 2025 23:37:40 +0000 (0:00:00.925) 0:02:19.974 ***********
2025-05-13 23:37:45.337459 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-13 23:37:45.337462 | orchestrator | enable_outward_rabbitmq_True
2025-05-13 23:37:45.337466 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-13 23:37:45.337470 | orchestrator | outward_rabbitmq_restart
2025-05-13 23:37:45.337473 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:37:45.337477 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:37:45.337481 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:37:45.337488 | orchestrator |
2025-05-13 23:37:45.337494 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-13 23:37:45.337500 | orchestrator | skipping: no hosts matched
2025-05-13 23:37:45.337506 | orchestrator |
2025-05-13 23:37:45.337512 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-13 23:37:45.337518 | orchestrator | skipping: no hosts matched
2025-05-13 23:37:45.337524 | orchestrator |
2025-05-13 23:37:45.337530 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-05-13 23:37:45.337536 | orchestrator | skipping: no hosts matched
2025-05-13 23:37:45.337542 | orchestrator |
2025-05-13 23:37:45.337549 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:37:45.337554 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-05-13 23:37:45.337558 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-13 23:37:45.337562 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:37:45.337566 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 23:37:45.337569 | orchestrator |
2025-05-13 23:37:45.337573 | orchestrator |
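"Enable all stable feature flags" brings the freshly clustered brokers up to the current feature-flag level so that later rolling upgrades stay compatible. A hedged sketch of doing the same by hand against the kolla container (the container name "rabbitmq" matches the config items logged above; the SSH/docker invocation is an assumption about the environment, not the role's actual implementation):

```python
import subprocess

# Assumes passwordless SSH to the nodes and a container named "rabbitmq",
# as in the kolla config items shown earlier in this log.
for node in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
    subprocess.run(
        ["ssh", node, "sudo", "docker", "exec", "rabbitmq",
         "rabbitmqctl", "enable_feature_flag", "all"],
        check=True,
    )
```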
2025-05-13 23:37:45.337577 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:37:45.337581 | orchestrator | Tuesday 13 May 2025 23:37:43 +0000 (0:00:02.376) 0:02:22.351 ***********
2025-05-13 23:37:45.337584 | orchestrator | ===============================================================================
2025-05-13 23:37:45.337588 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 76.27s
2025-05-13 23:37:45.337592 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 16.42s
2025-05-13 23:37:45.337599 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.33s
2025-05-13 23:37:45.337602 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.30s
2025-05-13 23:37:45.337606 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.20s
2025-05-13 23:37:45.337610 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.38s
2025-05-13 23:37:45.337613 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.26s
2025-05-13 23:37:45.337617 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.20s
2025-05-13 23:37:45.337621 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.15s
2025-05-13 23:37:45.337625 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.13s
2025-05-13 23:37:45.337628 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.01s
2025-05-13 23:37:45.337632 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.91s
2025-05-13 23:37:45.337635 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.89s
2025-05-13 23:37:45.337639 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.82s
2025-05-13 23:37:45.337643 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.79s
2025-05-13 23:37:45.337649 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.14s
2025-05-13 23:37:45.337653 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.13s
2025-05-13 23:37:45.337659 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.12s
2025-05-13 23:37:45.337663 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.10s
2025-05-13 23:37:45.337667 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.03s
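"Waiting for rabbitmq to start" dominates the recap (76.27s) because the brokers are restarted one at a time and each must come back and rejoin the cluster before the next goes down. A rough sketch of such a readiness gate; the port is the management port from the config shown above, and a real check would inspect broker health rather than just TCP reachability:

```python
import socket
import time


def wait_for_rabbitmq(host, port=15672, timeout=300, interval=5):
    """Block until the given port accepts connections or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    raise TimeoutError(f"RabbitMQ on {host}:{port} not up after {timeout}s")
```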
2025-05-13 23:37:45.337727 | orchestrator | 2025-05-13 23:37:45 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output elided: tasks f311897f-0ee7-4695-88cb-19ce7dbe65ab, e6838759-dc51-4445-8ca4-ecc8c7941f72 and bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 remain in state STARTED; the check repeats roughly every 3 seconds from 23:37:48 through 23:38:52 ...]
bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state STARTED 2025-05-13 23:38:52.505898 | orchestrator | 2025-05-13 23:38:52 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:38:55.553624 | orchestrator | 2025-05-13 23:38:55 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED 2025-05-13 23:38:55.556426 | orchestrator | 2025-05-13 23:38:55 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:38:55.563970 | orchestrator | 2025-05-13 23:38:55 | INFO  | Task bd4c2ed2-1b3d-48bb-863b-655b659cf7e5 is in state SUCCESS 2025-05-13 23:38:55.567288 | orchestrator | 2025-05-13 23:38:55.567340 | orchestrator | 2025-05-13 23:38:55.567353 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:38:55.567364 | orchestrator | 2025-05-13 23:38:55.567376 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:38:55.567387 | orchestrator | Tuesday 13 May 2025 23:36:18 +0000 (0:00:00.218) 0:00:00.218 *********** 2025-05-13 23:38:55.567398 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:38:55.567411 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:38:55.567422 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:38:55.567432 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:38:55.567443 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:38:55.567454 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:38:55.567464 | orchestrator | 2025-05-13 23:38:55.567475 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:38:55.567486 | orchestrator | Tuesday 13 May 2025 23:36:19 +0000 (0:00:00.607) 0:00:00.826 *********** 2025-05-13 23:38:55.567497 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-13 23:38:55.567508 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-13 23:38:55.567519 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-13 23:38:55.567550 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-13 23:38:55.567561 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-13 23:38:55.567572 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-13 23:38:55.567583 | orchestrator | 2025-05-13 23:38:55.567593 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-13 23:38:55.567604 | orchestrator | 2025-05-13 23:38:55.567683 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-13 23:38:55.567695 | orchestrator | Tuesday 13 May 2025 23:36:20 +0000 (0:00:01.548) 0:00:02.375 *********** 2025-05-13 23:38:55.567748 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:38:55.567763 | orchestrator | 2025-05-13 23:38:55.567786 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-13 23:38:55.567798 | orchestrator | Tuesday 13 May 2025 23:36:22 +0000 (0:00:01.618) 0:00:03.993 *********** 2025-05-13 23:38:55.567812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
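[Editorial note: the block above is the osism client polling its task queue, re-checking each task ID until the underlying kolla-ansible play reports SUCCESS or FAILURE. A minimal sketch of such a wait loop, assuming a hypothetical get_task_state(task_id) helper that returns the Celery-style state string:]

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        # Poll every task until it leaves the STARTED state, mirroring the
        # "Wait 1 second(s) until the next check" entries in the log above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)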
2025-05-13 23:38:55.567288 | orchestrator |
2025-05-13 23:38:55.567340 | orchestrator |
2025-05-13 23:38:55.567353 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:38:55.567364 | orchestrator |
2025-05-13 23:38:55.567376 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:38:55.567387 | orchestrator | Tuesday 13 May 2025 23:36:18 +0000 (0:00:00.218) 0:00:00.218 ***********
2025-05-13 23:38:55.567398 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:38:55.567411 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:38:55.567422 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:38:55.567432 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.567443 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.567454 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:38:55.567464 | orchestrator |
2025-05-13 23:38:55.567475 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 23:38:55.567486 | orchestrator | Tuesday 13 May 2025 23:36:19 +0000 (0:00:00.607) 0:00:00.826 ***********
2025-05-13 23:38:55.567497 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-05-13 23:38:55.567508 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-05-13 23:38:55.567519 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-05-13 23:38:55.567550 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-05-13 23:38:55.567561 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-05-13 23:38:55.567572 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-05-13 23:38:55.567583 | orchestrator |
2025-05-13 23:38:55.567593 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-05-13 23:38:55.567604 | orchestrator |
2025-05-13 23:38:55.567683 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-05-13 23:38:55.567695 | orchestrator | Tuesday 13 May 2025 23:36:20 +0000 (0:00:01.548) 0:00:02.375 ***********
2025-05-13 23:38:55.567748 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:38:55.567763 | orchestrator |
2025-05-13 23:38:55.567786 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-05-13 23:38:55.567798 | orchestrator | Tuesday 13 May 2025 23:36:22 +0000 (0:00:01.618) 0:00:03.993 ***********
2025-05-13 23:38:55.567812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567826 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567838 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567882 | orchestrator |
2025-05-13 23:38:55.567908 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-05-13 23:38:55.567919 | orchestrator | Tuesday 13 May 2025 23:36:23 +0000 (0:00:01.532) 0:00:05.525 ***********
2025-05-13 23:38:55.567940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.567997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568009 | orchestrator |
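[Editorial note: each service's /etc/kolla/<service>/ directory is bind-mounted read-only at /var/lib/kolla/config_files/ inside the container, and the config.json copied by the task above is what kolla's entrypoint reads at startup to put config files into place before launching the service. A representative shape, with hypothetical command and file values not read from this build:]

    import json

    # Illustrative only: the real config.json is rendered from
    # kolla-ansible templates; command and perms here are assumptions.
    config = {
        "command": "/usr/bin/ovn-controller unix:/run/openvswitch/db.sock",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/example.conf",
                "dest": "/etc/ovn/example.conf",
                "owner": "root",
                "perm": "0600",
            }
        ],
    }
    print(json.dumps(config, indent=4))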
2025-05-13 23:38:55.568019 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-05-13 23:38:55.568030 | orchestrator | Tuesday 13 May 2025 23:36:26 +0000 (0:00:02.581) 0:00:08.107 ***********
2025-05-13 23:38:55.568041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568053 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568070 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568215 | orchestrator |
2025-05-13 23:38:55.568233 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-05-13 23:38:55.568257 | orchestrator | Tuesday 13 May 2025 23:36:27 +0000 (0:00:01.613) 0:00:09.720 ***********
2025-05-13 23:38:55.568277 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes':
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.568351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.568372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.568383 | orchestrator | 2025-05-13 23:38:55.568405 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-13 23:38:55.568416 | orchestrator | Tuesday 13 May 2025 23:36:29 +0000 (0:00:01.845) 0:00:11.566 *********** 2025-05-13 23:38:55.568428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.568439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.568450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.568466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.568477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.568500 | orchestrator |
2025-05-13 23:38:55.568511 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-05-13 23:38:55.568522 | orchestrator | Tuesday 13 May 2025 23:36:31 +0000 (0:00:01.643) 0:00:13.209 ***********
2025-05-13 23:38:55.568533 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:38:55.568544 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:38:55.568555 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:38:55.568566 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:38:55.568576 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:38:55.568593 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:38:55.568604 | orchestrator |
2025-05-13 23:38:55.568615 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-05-13 23:38:55.568626 | orchestrator | Tuesday 13 May 2025 23:36:34 +0000 (0:00:02.606) 0:00:15.816 ***********
2025-05-13 23:38:55.568637 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-05-13 23:38:55.568648 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-05-13 23:38:55.568659 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-05-13 23:38:55.568670 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-05-13 23:38:55.568680 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-05-13 23:38:55.568691 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-05-13 23:38:55.568763 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-13 23:38:55.568778 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-13 23:38:55.568795 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-13 23:38:55.568806 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-13 23:38:55.568817 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-13 23:38:55.568827 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-13 23:38:55.568838 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-13 23:38:55.568850 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-13 23:38:55.568859 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-13 23:38:55.568869 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-13 23:38:55.568879 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-13 23:38:55.568888 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-13 23:38:55.568898 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-13 23:38:55.568908 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-13 23:38:55.568922 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-13 23:38:55.568932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-13 23:38:55.568941 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-13 23:38:55.568950 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-13 23:38:55.568960 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-13 23:38:55.568969 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-13 23:38:55.568979 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-13 23:38:55.568995 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-13 23:38:55.569005 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-13 23:38:55.569015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-13 23:38:55.569024 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-13 23:38:55.569033 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-13 23:38:55.569043 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-13 23:38:55.569053 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-13 23:38:55.569062 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-13 23:38:55.569072 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-13 23:38:55.569081 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-13 23:38:55.569090 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-13 23:38:55.569100 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-13 23:38:55.569109 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-13 23:38:55.569118 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-13 23:38:55.569128 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-13 23:38:55.569137 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-05-13 23:38:55.569147 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-05-13 23:38:55.569162 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-05-13 23:38:55.569172 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-05-13 23:38:55.569182 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-05-13 23:38:55.569191 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-13 23:38:55.569201 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-05-13 23:38:55.569210 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-13 23:38:55.569220 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-13 23:38:55.569229 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-13 23:38:55.569238 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-13 23:38:55.569248 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-13 23:38:55.569257 | orchestrator |
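[Editorial note: each (item={'name': ..., 'value': ..., 'state': ...}) pair above is one external_ids key on the node's local Open_vSwitch row; 'state': 'absent' removes the key instead of setting it, which is why gateway nodes and compute nodes get different changed/ok results. A minimal sketch of the equivalent ovs-vsctl calls, assuming ovs-vsctl is available on the node; the example values are taken from the testbed-node-0 entries above:]

    import subprocess

    def configure_external_id(name, value=None, state="present"):
        # "state: present" sets the key, "state: absent" removes it,
        # matching the changed/ok pairs in the task output above.
        if state == "present":
            cmd = ["ovs-vsctl", "set", "Open_vSwitch", ".",
                   f"external_ids:{name}={value}"]
        else:
            cmd = ["ovs-vsctl", "remove", "Open_vSwitch", ".",
                   "external_ids", name]
        subprocess.run(cmd, check=True)

    configure_external_id("ovn-encap-ip", "192.168.16.10")
    configure_external_id("ovn-encap-type", "geneve")
    configure_external_id("ovn-remote",
                          "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642")
    configure_external_id("ovn-cms-options",
                          "enable-chassis-as-gw,availability-zones=nova")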
2025-05-13 23:38:55.569267 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-13 23:38:55.569282 | orchestrator | Tuesday 13 May 2025 23:36:52 +0000 (0:00:18.229) 0:00:34.045 ***********
2025-05-13 23:38:55.569297 | orchestrator |
2025-05-13 23:38:55.569321 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-13 23:38:55.569339 | orchestrator | Tuesday 13 May 2025 23:36:52 +0000 (0:00:00.068) 0:00:34.114 ***********
2025-05-13 23:38:55.569356 | orchestrator |
2025-05-13 23:38:55.569374 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-13 23:38:55.569391 | orchestrator | Tuesday 13 May 2025 23:36:52 +0000 (0:00:00.073) 0:00:34.187 ***********
2025-05-13 23:38:55.569410 | orchestrator |
2025-05-13 23:38:55.569429 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-13 23:38:55.569446 | orchestrator | Tuesday 13 May 2025 23:36:52 +0000 (0:00:00.068) 0:00:34.255 ***********
2025-05-13 23:38:55.569463 | orchestrator |
2025-05-13 23:38:55.569474 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-13 23:38:55.569483 | orchestrator | Tuesday 13 May 2025 23:36:52 +0000 (0:00:00.084) 0:00:34.340 ***********
2025-05-13 23:38:55.569493 | orchestrator |
2025-05-13 23:38:55.569502 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-13 23:38:55.569511 | orchestrator | Tuesday 13 May 2025 23:36:52 +0000 (0:00:00.068) 0:00:34.408 ***********
2025-05-13 23:38:55.569520 | orchestrator |
2025-05-13 23:38:55.569530 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-05-13 23:38:55.569539 | orchestrator | Tuesday 13 May 2025 23:36:52 +0000 (0:00:00.069) 0:00:34.477 ***********
2025-05-13 23:38:55.569548 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.569558 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:38:55.569568 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:38:55.569577 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:38:55.569587 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:38:55.569596 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.569606 | orchestrator |
2025-05-13 23:38:55.569615 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-05-13 23:38:55.569625 | orchestrator | Tuesday 13 May 2025 23:36:55 +0000 (0:00:02.307) 0:00:36.785 ***********
2025-05-13 23:38:55.569634 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:38:55.569644 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:38:55.569653 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:38:55.569662 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:38:55.569671 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:38:55.569681 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:38:55.569690 | orchestrator |
2025-05-13 23:38:55.569700 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-05-13 23:38:55.569740 | orchestrator |
2025-05-13 23:38:55.569750 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-13 23:38:55.569759 | orchestrator | Tuesday 13 May 2025 23:37:34 +0000 (0:00:39.947) 0:01:16.732 ***********
2025-05-13 23:38:55.569769 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:38:55.569778 | orchestrator |
2025-05-13 23:38:55.569788 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-13 23:38:55.569797 | orchestrator | Tuesday 13 May 2025 23:37:35 +0000 (0:00:00.704) 0:01:17.437 ***********
2025-05-13 23:38:55.569807 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:38:55.569816 | orchestrator |
2025-05-13 23:38:55.569826 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-05-13 23:38:55.569835 | orchestrator | Tuesday 13 May 2025 23:37:36 +0000 (0:00:00.681) 0:01:18.119 ***********
2025-05-13 23:38:55.569844 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.569854 | orchestrator | ok:
[testbed-node-0] 2025-05-13 23:38:55.569863 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:38:55.569873 | orchestrator | 2025-05-13 23:38:55.569882 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-13 23:38:55.569899 | orchestrator | Tuesday 13 May 2025 23:37:37 +0000 (0:00:00.838) 0:01:18.957 *********** 2025-05-13 23:38:55.569909 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:38:55.569919 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:38:55.569928 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:38:55.569944 | orchestrator | 2025-05-13 23:38:55.569954 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-13 23:38:55.569964 | orchestrator | Tuesday 13 May 2025 23:37:37 +0000 (0:00:00.340) 0:01:19.298 *********** 2025-05-13 23:38:55.569973 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:38:55.569983 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:38:55.569992 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:38:55.570002 | orchestrator | 2025-05-13 23:38:55.570011 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-13 23:38:55.570069 | orchestrator | Tuesday 13 May 2025 23:37:37 +0000 (0:00:00.333) 0:01:19.631 *********** 2025-05-13 23:38:55.570079 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:38:55.570088 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:38:55.570098 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:38:55.570107 | orchestrator | 2025-05-13 23:38:55.570117 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-13 23:38:55.570126 | orchestrator | Tuesday 13 May 2025 23:37:38 +0000 (0:00:00.528) 0:01:20.159 *********** 2025-05-13 23:38:55.570135 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:38:55.570144 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:38:55.570153 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:38:55.570163 | orchestrator | 2025-05-13 23:38:55.570172 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-13 23:38:55.570182 | orchestrator | Tuesday 13 May 2025 23:37:38 +0000 (0:00:00.331) 0:01:20.491 *********** 2025-05-13 23:38:55.570191 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570200 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570210 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570219 | orchestrator | 2025-05-13 23:38:55.570229 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-13 23:38:55.570238 | orchestrator | Tuesday 13 May 2025 23:37:39 +0000 (0:00:00.288) 0:01:20.779 *********** 2025-05-13 23:38:55.570247 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570257 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570266 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570275 | orchestrator | 2025-05-13 23:38:55.570284 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-13 23:38:55.570299 | orchestrator | Tuesday 13 May 2025 23:37:39 +0000 (0:00:00.279) 0:01:21.058 *********** 2025-05-13 23:38:55.570309 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570318 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570328 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570338 | orchestrator | 
2025-05-13 23:38:55.570347 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-13 23:38:55.570356 | orchestrator | Tuesday 13 May 2025 23:37:39 +0000 (0:00:00.524) 0:01:21.583 *********** 2025-05-13 23:38:55.570366 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570375 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570384 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570394 | orchestrator | 2025-05-13 23:38:55.570403 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-13 23:38:55.570413 | orchestrator | Tuesday 13 May 2025 23:37:40 +0000 (0:00:00.340) 0:01:21.924 *********** 2025-05-13 23:38:55.570428 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570445 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570463 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570480 | orchestrator | 2025-05-13 23:38:55.570497 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-13 23:38:55.570514 | orchestrator | Tuesday 13 May 2025 23:37:40 +0000 (0:00:00.468) 0:01:22.392 *********** 2025-05-13 23:38:55.570543 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570561 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570579 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570589 | orchestrator | 2025-05-13 23:38:55.570598 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-13 23:38:55.570608 | orchestrator | Tuesday 13 May 2025 23:37:41 +0000 (0:00:00.493) 0:01:22.885 *********** 2025-05-13 23:38:55.570617 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570627 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570636 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570646 | orchestrator | 2025-05-13 23:38:55.570655 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-13 23:38:55.570664 | orchestrator | Tuesday 13 May 2025 23:37:41 +0000 (0:00:00.570) 0:01:23.456 *********** 2025-05-13 23:38:55.570674 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570683 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570693 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570728 | orchestrator | 2025-05-13 23:38:55.570739 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-13 23:38:55.570748 | orchestrator | Tuesday 13 May 2025 23:37:42 +0000 (0:00:00.314) 0:01:23.770 *********** 2025-05-13 23:38:55.570757 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570767 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570776 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570785 | orchestrator | 2025-05-13 23:38:55.570795 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-13 23:38:55.570804 | orchestrator | Tuesday 13 May 2025 23:37:42 +0000 (0:00:00.321) 0:01:24.092 *********** 2025-05-13 23:38:55.570814 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570823 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570832 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570842 | orchestrator | 2025-05-13 23:38:55.570882 | orchestrator | TASK [ovn-db : 
Divide hosts by their OVN SB leader/follower role] ************** 2025-05-13 23:38:55.570892 | orchestrator | Tuesday 13 May 2025 23:37:42 +0000 (0:00:00.298) 0:01:24.390 *********** 2025-05-13 23:38:55.570901 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570911 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570920 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570929 | orchestrator | 2025-05-13 23:38:55.570939 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-13 23:38:55.570949 | orchestrator | Tuesday 13 May 2025 23:37:43 +0000 (0:00:00.508) 0:01:24.899 *********** 2025-05-13 23:38:55.570958 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.570968 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.570986 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.570996 | orchestrator | 2025-05-13 23:38:55.571006 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-13 23:38:55.571015 | orchestrator | Tuesday 13 May 2025 23:37:43 +0000 (0:00:00.298) 0:01:25.197 *********** 2025-05-13 23:38:55.571025 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:38:55.571034 | orchestrator | 2025-05-13 23:38:55.571044 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-13 23:38:55.571053 | orchestrator | Tuesday 13 May 2025 23:37:44 +0000 (0:00:00.597) 0:01:25.795 *********** 2025-05-13 23:38:55.571063 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:38:55.571072 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:38:55.571082 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:38:55.571091 | orchestrator | 2025-05-13 23:38:55.571100 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-13 23:38:55.571110 | orchestrator | Tuesday 13 May 2025 23:37:44 +0000 (0:00:00.909) 0:01:26.705 *********** 2025-05-13 23:38:55.571120 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:38:55.571137 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:38:55.571147 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:38:55.571156 | orchestrator | 2025-05-13 23:38:55.571166 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-13 23:38:55.571175 | orchestrator | Tuesday 13 May 2025 23:37:45 +0000 (0:00:00.692) 0:01:27.397 *********** 2025-05-13 23:38:55.571185 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.571194 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.571204 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.571213 | orchestrator | 2025-05-13 23:38:55.571223 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-13 23:38:55.571232 | orchestrator | Tuesday 13 May 2025 23:37:45 +0000 (0:00:00.339) 0:01:27.736 *********** 2025-05-13 23:38:55.571242 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.571251 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.571261 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.571270 | orchestrator | 2025-05-13 23:38:55.571286 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-13 23:38:55.571295 | orchestrator | Tuesday 13 May 2025 23:37:46 +0000 
(0:00:00.342) 0:01:28.078 *********** 2025-05-13 23:38:55.571305 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.571315 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.571325 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.571334 | orchestrator | 2025-05-13 23:38:55.571344 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-13 23:38:55.571354 | orchestrator | Tuesday 13 May 2025 23:37:46 +0000 (0:00:00.554) 0:01:28.633 *********** 2025-05-13 23:38:55.571363 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.571372 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.571382 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.571392 | orchestrator | 2025-05-13 23:38:55.571401 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-13 23:38:55.571411 | orchestrator | Tuesday 13 May 2025 23:37:47 +0000 (0:00:00.353) 0:01:28.987 *********** 2025-05-13 23:38:55.571420 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.571429 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.571439 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.571448 | orchestrator | 2025-05-13 23:38:55.571458 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-13 23:38:55.571467 | orchestrator | Tuesday 13 May 2025 23:37:47 +0000 (0:00:00.307) 0:01:29.294 *********** 2025-05-13 23:38:55.571477 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:38:55.571486 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:38:55.571495 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:38:55.571505 | orchestrator | 2025-05-13 23:38:55.571514 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-13 23:38:55.571523 | orchestrator | Tuesday 13 May 2025 23:37:47 +0000 (0:00:00.329) 0:01:29.624 *********** 2025-05-13 23:38:55.571534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571695 | orchestrator | 2025-05-13 23:38:55.571759 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-13 23:38:55.571771 | orchestrator | Tuesday 13 May 2025 23:37:49 +0000 (0:00:01.712) 0:01:31.336 *********** 2025-05-13 23:38:55.571781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571890 | orchestrator | 2025-05-13 23:38:55.571900 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-13 23:38:55.571909 | orchestrator | Tuesday 13 May 2025 23:37:53 +0000 (0:00:04.018) 0:01:35.355 *********** 2025-05-13 23:38:55.571919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-13 23:38:55.571929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.571992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572021 | orchestrator | 2025-05-13 23:38:55.572031 | orchestrator | TASK [ovn-db : 
Flush handlers] *************************************************
2025-05-13 23:38:55.572045 | orchestrator | Tuesday 13 May 2025 23:37:55 +0000 (0:00:02.222) 0:01:37.577 ***********
2025-05-13 23:38:55.572055 | orchestrator |
2025-05-13 23:38:55.572065 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-13 23:38:55.572075 | orchestrator | Tuesday 13 May 2025 23:37:55 +0000 (0:00:00.070) 0:01:37.647 ***********
2025-05-13 23:38:55.572084 | orchestrator |
2025-05-13 23:38:55.572094 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-13 23:38:55.572103 | orchestrator | Tuesday 13 May 2025 23:37:55 +0000 (0:00:00.067) 0:01:37.715 ***********
2025-05-13 23:38:55.572112 | orchestrator |
2025-05-13 23:38:55.572122 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-13 23:38:55.572132 | orchestrator | Tuesday 13 May 2025 23:37:56 +0000 (0:00:00.068) 0:01:37.784 ***********
2025-05-13 23:38:55.572141 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:38:55.572152 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:38:55.572161 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:38:55.572171 | orchestrator |
2025-05-13 23:38:55.572180 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-13 23:38:55.572190 | orchestrator | Tuesday 13 May 2025 23:38:04 +0000 (0:00:08.303) 0:01:46.088 ***********
2025-05-13 23:38:55.572205 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:38:55.572215 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:38:55.572224 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:38:55.572234 | orchestrator |
2025-05-13 23:38:55.572243 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-13 23:38:55.572253 | orchestrator | Tuesday 13 May 2025 23:38:07 +0000 (0:00:02.757) 0:01:48.845 ***********
2025-05-13 23:38:55.572262 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:38:55.572272 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:38:55.572280 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:38:55.572288 | orchestrator |
2025-05-13 23:38:55.572296 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-13 23:38:55.572304 | orchestrator | Tuesday 13 May 2025 23:38:14 +0000 (0:00:07.823) 0:01:56.669 ***********
2025-05-13 23:38:55.572311 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:38:55.572319 | orchestrator |
2025-05-13 23:38:55.572327 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-13 23:38:55.572335 | orchestrator | Tuesday 13 May 2025 23:38:15 +0000 (0:00:00.142) 0:01:56.811 ***********
2025-05-13 23:38:55.572343 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.572351 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.572359 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:38:55.572367 | orchestrator |
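[Editorial note: the leader lookup above matters because the connection-settings tasks that follow run only on the current Raft leader; the two followers are skipped. A hedged sketch of the kind of commands involved in pointing the NB/SB ovsdb-servers at a TCP listener; the listen endpoints and probe interval here are illustrative assumptions, not values read from this log:]

    import subprocess

    # Expose the OVN Northbound and Southbound databases over TCP so that
    # ovn-controller and the Neutron driver can reach them; run on the
    # Raft leader only (illustrative endpoints and probe interval).
    subprocess.run(["ovn-nbctl", "--inactivity-probe=60000",
                    "set-connection", "ptcp:6641:0.0.0.0"], check=True)
    subprocess.run(["ovn-sbctl", "--inactivity-probe=60000",
                    "set-connection", "ptcp:6642:0.0.0.0"], check=True)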
2025-05-13 23:38:55.572413 | orchestrator |
2025-05-13 23:38:55.572420 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-13 23:38:55.572428 | orchestrator | Tuesday 13 May 2025 23:38:16 +0000 (0:00:00.679) 0:01:58.580 ***********
2025-05-13 23:38:55.572436 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.572444 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.572452 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:38:55.572459 | orchestrator |
2025-05-13 23:38:55.572467 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-13 23:38:55.572475 | orchestrator | Tuesday 13 May 2025 23:38:17 +0000 (0:00:00.788) 0:01:59.368 ***********
2025-05-13 23:38:55.572483 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:38:55.572490 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:38:55.572498 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:38:55.572506 | orchestrator |
2025-05-13 23:38:55.572514 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-13 23:38:55.572527 | orchestrator | Tuesday 13 May 2025 23:38:18 +0000 (0:00:00.682) 0:02:00.051 ***********
2025-05-13 23:38:55.572541 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.572553 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.572570 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:38:55.572585 | orchestrator |
2025-05-13 23:38:55.572598 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-13 23:38:55.572610 | orchestrator | Tuesday 13 May 2025 23:38:19 +0000 (0:00:00.818) 0:02:00.870 ***********
2025-05-13 23:38:55.572621 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.572629 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.572637 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:38:55.572644 | orchestrator |
2025-05-13 23:38:55.572652 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-05-13 23:38:55.572662 | orchestrator | Tuesday 13 May 2025 23:38:20 +0000 (0:00:01.190) 0:02:02.061 ***********
2025-05-13 23:38:55.572676 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.572689 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.572720 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:38:55.572741 | orchestrator |
2025-05-13 23:38:55.572766 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-05-13 23:38:55.572780 | orchestrator | Tuesday 13 May 2025 23:38:20 +0000 (0:00:00.333) 0:02:02.394 ***********
2025-05-13 23:38:55.572793 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.572808 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.572822 |
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572831 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572840 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572848 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572856 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572865 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572881 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572889 | orchestrator | 2025-05-13 23:38:55.572897 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-13 23:38:55.572910 | orchestrator | Tuesday 13 May 2025 23:38:22 +0000 (0:00:01.408) 0:02:03.803 *********** 2025-05-13 23:38:55.572919 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572927 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572935 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572947 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572972 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.572989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-05-13 23:38:55.572997 | orchestrator | 2025-05-13 23:38:55.573005 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-13 23:38:55.573012 | orchestrator | Tuesday 13 May 2025 23:38:25 +0000 (0:00:03.730) 0:02:07.533 *********** 2025-05-13 23:38:55.573030 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.573038 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.573046 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.573055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.573067 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.573076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.573084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:38:55.573092 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.573101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 23:38:55.573109 | orchestrator |
2025-05-13 23:38:55.573117 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-13 23:38:55.573125 | orchestrator | Tuesday 13 May 2025 23:38:28 +0000 (0:00:03.078) 0:02:10.612 ***********
2025-05-13 23:38:55.573132 | orchestrator |
2025-05-13 23:38:55.573140 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-13 23:38:55.573152 | orchestrator | Tuesday 13 May 2025 23:38:28 +0000 (0:00:00.075) 0:02:10.688 ***********
2025-05-13 23:38:55.573160 | orchestrator |
2025-05-13 23:38:55.573168 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-13 23:38:55.573176 | orchestrator | Tuesday 13 May 2025 23:38:28 +0000 (0:00:00.076) 0:02:10.765 ***********
2025-05-13 23:38:55.573183 | orchestrator |
2025-05-13 23:38:55.573191 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-13 23:38:55.573199 | orchestrator | Tuesday 13 May 2025 23:38:29 +0000 (0:00:00.086) 0:02:10.851 ***********
2025-05-13 23:38:55.573206 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:38:55.573214 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:38:55.573222 | orchestrator |
2025-05-13 23:38:55.573241 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-13 23:38:55.573255 | orchestrator | Tuesday 13 May 2025 23:38:35 +0000 (0:00:06.205) 0:02:17.057 ***********
2025-05-13 23:38:55.573267 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:38:55.573280 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:38:55.573292 | orchestrator |
2025-05-13 23:38:55.573304 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-13 23:38:55.573317 | orchestrator | Tuesday 13 May 2025 23:38:41 +0000 (0:00:06.147) 0:02:23.204 ***********
2025-05-13 23:38:55.573330 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:38:55.573344 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:38:55.573356 | orchestrator |
2025-05-13 23:38:55.573369 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-13 23:38:55.573378 | orchestrator | Tuesday 13 May 2025 23:38:47 +0000 (0:00:06.107) 0:02:29.311 ***********
2025-05-13 23:38:55.573386 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:38:55.573393 | orchestrator |
2025-05-13 23:38:55.573401 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-13 23:38:55.573409 | orchestrator | Tuesday 13 May 2025 23:38:47 +0000 (0:00:00.151) 0:02:29.462 ***********
2025-05-13 23:38:55.573416 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.573424 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.573432 | orchestrator | ok: [testbed-node-2]
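The "Get OVN_Northbound cluster leader" task above identifies the Raft leader so that the connection settings which follow are applied exactly once. The role's actual implementation is not visible in this log; a minimal sketch of checking a node's role by hand, assuming docker CLI access and the usual OVN control-socket path inside the ovn_nb_db container:

    import subprocess

    def nb_cluster_role(container: str = "ovn_nb_db") -> str:
        """Return this node's Raft role ("leader", "follower" or
        "candidate") for the OVN_Northbound database. The socket path
        is an assumption based on common OVN packaging."""
        out = subprocess.run(
            ["docker", "exec", container, "ovs-appctl",
             "-t", "/var/run/ovn/ovnnb_db.ctl",
             "cluster/status", "OVN_Northbound"],
            capture_output=True, text=True, check=True,
        ).stdout
        # cluster/status prints a "Role: ..." line for the local server
        for line in out.splitlines():
            if line.strip().startswith("Role:"):
                return line.split(":", 1)[1].strip()
        raise RuntimeError("no Role line in cluster/status output")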
2025-05-13 23:38:55.573440 | orchestrator |
2025-05-13 23:38:55.573447 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-05-13 23:38:55.573455 | orchestrator | Tuesday 13 May 2025 23:38:48 +0000 (0:00:01.133) 0:02:30.596 ***********
2025-05-13 23:38:55.573462 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:38:55.573470 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:38:55.573478 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:38:55.573485 | orchestrator |
2025-05-13 23:38:55.573493 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-13 23:38:55.573501 | orchestrator | Tuesday 13 May 2025 23:38:49 +0000 (0:00:00.618) 0:02:31.215 ***********
2025-05-13 23:38:55.573508 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.573516 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.573524 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:38:55.573531 | orchestrator |
2025-05-13 23:38:55.573544 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-13 23:38:55.573558 | orchestrator | Tuesday 13 May 2025 23:38:50 +0000 (0:00:00.788) 0:02:32.003 ***********
2025-05-13 23:38:55.573571 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:38:55.573584 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:38:55.573601 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:38:55.573619 | orchestrator |
2025-05-13 23:38:55.573632 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-13 23:38:55.573646 | orchestrator | Tuesday 13 May 2025 23:38:51 +0000 (0:00:00.769) 0:02:32.773 ***********
2025-05-13 23:38:55.573659 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.573672 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.573686 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:38:55.573733 | orchestrator |
2025-05-13 23:38:55.573743 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-13 23:38:55.573750 | orchestrator | Tuesday 13 May 2025 23:38:52 +0000 (0:00:01.118) 0:02:33.892 ***********
2025-05-13 23:38:55.573758 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:38:55.573766 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:38:55.573774 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:38:55.573781 | orchestrator |
2025-05-13 23:38:55.573789 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:38:55.573797 | orchestrator | testbed-node-0 : ok=44 changed=18 unreachable=0 failed=0 skipped=20 rescued=0 ignored=0
2025-05-13 23:38:55.573805 | orchestrator | testbed-node-1 : ok=43 changed=19 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
2025-05-13 23:38:55.573813 | orchestrator | testbed-node-2 : ok=43 changed=19 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
2025-05-13 23:38:55.573821 | orchestrator | testbed-node-3 : ok=12 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:38:55.573829 | orchestrator | testbed-node-4 : ok=12 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:38:55.573837 | orchestrator | testbed-node-5 : ok=12 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:38:55.573845 | orchestrator |
2025-05-13 23:38:55.573853 | orchestrator |
2025-05-13 23:38:55.573861 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:38:55.573869 | orchestrator | Tuesday 13 May 2025 23:38:53 +0000 (0:00:01.248) 0:02:35.141 ***********
2025-05-13 23:38:55.573876 | orchestrator | ===============================================================================
2025-05-13 23:38:55.573884 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 39.95s
2025-05-13 23:38:55.573892 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.23s
2025-05-13 23:38:55.573899 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.51s
2025-05-13 23:38:55.573907 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.93s
2025-05-13 23:38:55.573914 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.90s
2025-05-13 23:38:55.573922 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.02s
2025-05-13 23:38:55.573930 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.73s
2025-05-13 23:38:55.573944 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.08s
2025-05-13 23:38:55.573953 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.61s
2025-05-13 23:38:55.573960 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.58s
2025-05-13 23:38:55.573968 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.31s
2025-05-13 23:38:55.573976 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.22s
2025-05-13 23:38:55.573984 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.85s
2025-05-13 23:38:55.573991 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.71s
2025-05-13 23:38:55.573999 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.64s
2025-05-13 23:38:55.574006 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.62s
2025-05-13 23:38:55.574014 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.61s
2025-05-13 23:38:55.574076 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.55s
2025-05-13 23:38:55.574084 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.53s
2025-05-13 23:38:55.574099 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s
2025-05-13 23:38:55.574107 | orchestrator | 2025-05-13 23:38:55 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:38:58.616132 | orchestrator | 2025-05-13 23:38:58 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:38:58.616257 | orchestrator | 2025-05-13 23:38:58 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
2025-05-13 23:38:58.616690 | orchestrator | 2025-05-13 23:38:58 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:39:01.655021 | orchestrator | 2025-05-13 23:39:01 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state STARTED
2025-05-13 23:39:01.656796 | orchestrator | 2025-05-13 23:39:01 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED
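Stepping back to the "Configure OVN NB/SB connection settings" tasks above: they ran as changed only on the elected leader while the other two nodes skipped. Outside Ansible, the equivalent is typically an ovn-nbctl/ovn-sbctl set-connection call; a sketch assuming the conventional NB/SB ports 6641 and 6642 and that the ctl tools are available inside the database containers (the exact options kolla passes, such as inactivity-probe tuning, are not visible in this log):

    import subprocess

    # Conventional OVN NB/SB TCP ports (assumed, not shown in the log)
    NB_PORT, SB_PORT = 6641, 6642

    def configure_ovn_connections() -> None:
        """Tell the NB and SB ovsdb-servers to listen on TCP."""
        subprocess.run(["docker", "exec", "ovn_nb_db", "ovn-nbctl",
                        "set-connection", f"ptcp:{NB_PORT}:0.0.0.0"],
                       check=True)
        subprocess.run(["docker", "exec", "ovn_sb_db", "ovn-sbctl",
                        "set-connection", f"ptcp:{SB_PORT}:0.0.0.0"],
                       check=True)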
[... identical status checks for tasks f311897f-0ee7-4695-88cb-19ce7dbe65ab and e6838759-dc51-4445-8ca4-ecc8c7941f72, both in state STARTED, repeated roughly every 3 seconds until 23:41:34 ...]
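The repeated status lines condensed above come from the deployment tooling polling the two task IDs until they leave the STARTED state. A minimal sketch of such a wait loop; get_task_state is a hypothetical callable standing in for the real client call, which this log does not reveal:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval: float = 1.0):
        """Poll task states until every task finishes, mirroring the
        'is in state ... / Wait 1 second(s)' loop seen in this log."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)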
2025-05-13 23:41:37.273074 | orchestrator | 2025-05-13 23:41:37 | INFO  | Task f311897f-0ee7-4695-88cb-19ce7dbe65ab is in state SUCCESS
2025-05-13 23:41:37.274096 | orchestrator |
2025-05-13 23:41:37.274125 | orchestrator |
2025-05-13 23:41:37.274130 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:41:37.274135 | orchestrator |
2025-05-13 23:41:37.274140 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:41:37.274144 | orchestrator | Tuesday 13 May 2025 23:35:01 +0000 (0:00:00.748) 0:00:00.748 ***********
2025-05-13 23:41:37.274148 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:41:37.274153 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:41:37.274157 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:41:37.274161 | orchestrator |
2025-05-13 23:41:37.274165 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 23:41:37.274169 | orchestrator | Tuesday 13 May 2025 23:35:02 +0000 (0:00:00.490) 0:00:01.239 ***********
2025-05-13 23:41:37.274174 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-05-13 23:41:37.274178 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-05-13 23:41:37.274181 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-05-13 23:41:37.274185 | orchestrator |
2025-05-13 23:41:37.274189 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-13 23:41:37.274192 | orchestrator |
2025-05-13 23:41:37.274196 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-13 23:41:37.274200 | orchestrator | Tuesday 13 May 2025 23:35:02 +0000 (0:00:00.698) 0:00:01.937 ***********
2025-05-13 23:41:37.274203 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
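Before HAProxy and keepalived come up, the deploy tasks below set net.ipv4.ip_nonlocal_bind and net.ipv6.ip_nonlocal_bind to 1, so a backup node can bind the keepalived VIP before the address is actually assigned to it, and raise net.unix.max_dgram_qlen. A hand-rolled equivalent of that sysctl step, writing /proc/sys directly (kolla applies these through Ansible's sysctl handling, not like this):

    from pathlib import Path

    # Values taken from the "Setting sysctl values" task that follows;
    # net.ipv4.tcp_retries2 is left at KOLLA_UNSET in the log, i.e. untouched.
    SYSCTLS = {
        "net.ipv4.ip_nonlocal_bind": "1",  # bind the VIP before failover
        "net.ipv6.ip_nonlocal_bind": "1",
        "net.unix.max_dgram_qlen": "128",
    }

    def apply_sysctls() -> None:
        for key, value in SYSCTLS.items():
            # net.ipv4.ip_nonlocal_bind -> /proc/sys/net/ipv4/ip_nonlocal_bind
            Path("/proc/sys", *key.split(".")).write_text(value)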
2025-05-13 23:41:37.274207 | orchestrator |
2025-05-13 23:41:37.274211 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-13 23:41:37.274215 | orchestrator | Tuesday 13 May 2025 23:35:03 +0000 (0:00:00.944) 0:00:02.882 ***********
2025-05-13 23:41:37.274218 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:41:37.274222 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:41:37.274226 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:41:37.274229 | orchestrator |
2025-05-13 23:41:37.274248 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-13 23:41:37.274251 | orchestrator | Tuesday 13 May 2025 23:35:04 +0000 (0:00:00.880) 0:00:03.763 ***********
2025-05-13 23:41:37.274255 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:41:37.274259 | orchestrator |
2025-05-13 23:41:37.274263 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-05-13 23:41:37.274266 | orchestrator | Tuesday 13 May 2025 23:35:05 +0000 (0:00:01.003) 0:00:04.767 ***********
2025-05-13 23:41:37.274270 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:41:37.274274 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:41:37.274277 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:41:37.274281 | orchestrator |
2025-05-13 23:41:37.274284 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-05-13 23:41:37.274288 | orchestrator | Tuesday 13 May 2025 23:35:06 +0000 (0:00:00.758) 0:00:05.526 ***********
2025-05-13 23:41:37.274292 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-13 23:41:37.274296 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-13 23:41:37.274299 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-13 23:41:37.274303 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-13 23:41:37.274306 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-13 23:41:37.274310 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-13 23:41:37.274314 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-13 23:41:37.274318 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-13 23:41:37.274322 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-13 23:41:37.274325 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-13 23:41:37.274329 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-13 23:41:37.274332 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-13 23:41:37.274336 | orchestrator |
2025-05-13 23:41:37.274348 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-13 23:41:37.274352 | orchestrator | Tuesday 13 May 2025 23:35:09 +0000 (0:00:03.181) 0:00:08.707 *********** 2025-05-13
23:41:37.274356 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-13 23:41:37.274361 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-13 23:41:37.274368 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-13 23:41:37.274374 | orchestrator | 2025-05-13 23:41:37.274381 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-13 23:41:37.274387 | orchestrator | Tuesday 13 May 2025 23:35:10 +0000 (0:00:01.234) 0:00:09.942 *********** 2025-05-13 23:41:37.274393 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-13 23:41:37.274400 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-13 23:41:37.274406 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-13 23:41:37.274413 | orchestrator | 2025-05-13 23:41:37.274419 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-13 23:41:37.274426 | orchestrator | Tuesday 13 May 2025 23:35:12 +0000 (0:00:01.927) 0:00:11.869 *********** 2025-05-13 23:41:37.274462 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-13 23:41:37.274469 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.274485 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-13 23:41:37.274489 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.274499 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-13 23:41:37.274503 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.274507 | orchestrator | 2025-05-13 23:41:37.274511 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-13 23:41:37.274515 | orchestrator | Tuesday 13 May 2025 23:35:13 +0000 (0:00:01.063) 0:00:12.933 *********** 2025-05-13 23:41:37.274520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.274528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.274588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.274594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.274632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.274644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.274656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.274663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.274670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.274676 | orchestrator | 2025-05-13 23:41:37.274682 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-13 23:41:37.274688 | orchestrator | Tuesday 13 May 2025 23:35:16 +0000 (0:00:03.079) 0:00:16.013 *********** 2025-05-13 23:41:37.274694 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.274724 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.274731 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.274738 | orchestrator | 2025-05-13 23:41:37.274744 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-13 23:41:37.274748 | orchestrator | Tuesday 13 May 2025 23:35:18 +0000 (0:00:01.127) 0:00:17.140 *********** 2025-05-13 23:41:37.274752 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-13 23:41:37.274757 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-13 23:41:37.274761 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-13 23:41:37.274765 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-13 23:41:37.274769 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-13 23:41:37.274773 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-13 23:41:37.274778 | orchestrator | 2025-05-13 23:41:37.274782 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-13 23:41:37.274786 | orchestrator | Tuesday 13 May 2025 23:35:20 +0000 (0:00:02.801) 0:00:19.941 *********** 2025-05-13 23:41:37.274790 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.274795 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.274799 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.274803 | orchestrator | 2025-05-13 23:41:37.274808 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-13 23:41:37.274812 | orchestrator | Tuesday 13 May 2025 23:35:22 +0000 (0:00:01.451) 0:00:21.393 *********** 2025-05-13 23:41:37.274816 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.274820 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.274824 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.274829 | orchestrator | 2025-05-13 23:41:37.274833 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-13 23:41:37.274841 | orchestrator | Tuesday 13 May 2025 23:35:23 +0000 (0:00:01.376) 0:00:22.769 *********** 2025-05-13 23:41:37.274849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.274860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.274865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.274870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.274874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.274879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.274884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-13 23:41:37.274896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-13 23:41:37.274900 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.274905 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.274916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.274920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.274942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.274947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-13 23:41:37.274951 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.274955 | orchestrator | 2025-05-13 23:41:37.274959 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-13 23:41:37.274967 | orchestrator | Tuesday 13 May 2025 23:35:24 +0000 (0:00:00.730) 0:00:23.499 *********** 2025-05-13 23:41:37.274972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.274978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.274987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.274991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.274995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.274999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-13 23:41:37.275018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.275038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-13 23:41:37.275045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.275053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287', '__omit_place_holder__3dfbccd82259358478a5781aa9ebbf1173aa2287'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-13 23:41:37.275072 | orchestrator | 2025-05-13 23:41:37.275076 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-13 23:41:37.275080 | orchestrator | Tuesday 13 May 2025 23:35:29 +0000 (0:00:05.063) 0:00:28.563 *********** 2025-05-13 23:41:37.275084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.275145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.275149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.275153 | orchestrator | 2025-05-13 23:41:37.275157 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-13 23:41:37.275177 | orchestrator | Tuesday 13 May 2025 23:35:33 +0000 (0:00:04.092) 0:00:32.655 *********** 2025-05-13 23:41:37.275182 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-13 23:41:37.275186 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-13 23:41:37.275190 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-13 23:41:37.275193 | orchestrator | 2025-05-13 23:41:37.275197 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-13 23:41:37.275201 | orchestrator | Tuesday 13 May 2025 23:35:35 +0000 (0:00:01.723) 0:00:34.379 *********** 2025-05-13 23:41:37.275205 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-13 23:41:37.275209 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-13 23:41:37.275212 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-13 23:41:37.275216 | orchestrator | 2025-05-13 23:41:37.275467 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-13 23:41:37.275474 | orchestrator | Tuesday 13 May 2025 23:35:41 +0000 (0:00:06.003) 0:00:40.383 *********** 2025-05-13 23:41:37.275478 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.275482 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.275486 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.275490 | orchestrator | 2025-05-13 23:41:37.275494 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-13 23:41:37.275497 | orchestrator | Tuesday 13 May 2025 23:35:41 +0000 (0:00:00.692) 0:00:41.075 *********** 2025-05-13 23:41:37.275501 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-13 23:41:37.275506 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-13 23:41:37.275509 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-13 23:41:37.275513 | orchestrator | 2025-05-13 23:41:37.275517 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-13 23:41:37.275525 | orchestrator | Tuesday 13 May 2025 23:35:45 +0000 (0:00:03.057) 0:00:44.133 *********** 2025-05-13 23:41:37.275529 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-13 23:41:37.275533 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-13 23:41:37.275537 | orchestrator | 
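[Editorial note: the "Copying over haproxy.cfg", "Copying over proxysql config", the services.d overlay, and "Copying over keepalived.conf" tasks in this stretch all follow the same pattern: a Jinja2 template from the loadbalancer role is rendered onto each node under /etc/kolla/<service>/, from where the container reads it through the read-only bind mount visible in the volume lists above ('/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro'). A minimal sketch of such a task, assuming ansible.builtin.template; only the template path is taken verbatim from the log, and the handler name is illustrative:

    - name: Copying over haproxy.cfg
      ansible.builtin.template:
        src: "{{ item }}"
        dest: /etc/kolla/haproxy/haproxy.cfg   # seen by the container under /var/lib/kolla/config_files/
        mode: "0660"
      loop:
        - /ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2
      notify:
        - Restart haproxy container            # illustrative handler name, not read from the role
]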
changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-13 23:41:37.275541 | orchestrator | 2025-05-13 23:41:37.275564 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-13 23:41:37.275568 | orchestrator | Tuesday 13 May 2025 23:35:47 +0000 (0:00:02.073) 0:00:46.207 *********** 2025-05-13 23:41:37.275572 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-13 23:41:37.275576 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-13 23:41:37.275579 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-13 23:41:37.275583 | orchestrator | 2025-05-13 23:41:37.275587 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-13 23:41:37.275591 | orchestrator | Tuesday 13 May 2025 23:35:49 +0000 (0:00:02.431) 0:00:48.639 *********** 2025-05-13 23:41:37.275594 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-13 23:41:37.275598 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-13 23:41:37.275602 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-13 23:41:37.275605 | orchestrator | 2025-05-13 23:41:37.275609 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-13 23:41:37.275613 | orchestrator | Tuesday 13 May 2025 23:35:51 +0000 (0:00:02.327) 0:00:50.966 *********** 2025-05-13 23:41:37.275616 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.275620 | orchestrator | 2025-05-13 23:41:37.275624 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-13 23:41:37.275628 | orchestrator | Tuesday 13 May 2025 23:35:53 +0000 (0:00:01.378) 0:00:52.345 *********** 2025-05-13 23:41:37.275632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.275794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.275799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.275806 | 
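[Editorial note: "Copying over haproxy.pem" and "Copying over haproxy-internal.pem" above distribute the external and internal frontend TLS bundles into /etc/kolla/haproxy/ on every node, and the included copy-certs.yml / service-cert-copy tasks do the same for extra CA certificates. A hedged sketch of the shape of such a task; the source variable and default path are assumptions, not read from the role:

    - name: Copying over haproxy.pem
      ansible.builtin.copy:
        src: "{{ kolla_external_fqdn_cert | default('/etc/kolla/certificates/haproxy.pem') }}"  # assumed variable name and path
        dest: /etc/kolla/haproxy/haproxy.pem
        mode: "0660"
      become: true
]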
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.275810 | orchestrator | 2025-05-13 23:41:37.275814 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-13 23:41:37.275818 | orchestrator | Tuesday 13 May 2025 23:35:56 +0000 (0:00:03.552) 0:00:55.897 *********** 2025-05-13 23:41:37.275826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.275834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.275838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.275842 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.275846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.275850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.275856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.275860 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.275864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.275873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.275877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.275881 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.275885 | orchestrator | 2025-05-13 23:41:37.275889 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-13 23:41:37.275893 | orchestrator | Tuesday 13 May 2025 23:35:57 +0000 (0:00:00.769) 0:00:56.666 *********** 2025-05-13 23:41:37.275897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.275901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.275905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.275909 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.275915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.275925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.275930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.275936 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.275942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.275948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.275955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.275961 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.275967 | orchestrator | 2025-05-13 23:41:37.275972 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-13 23:41:37.275979 | orchestrator | Tuesday 13 May 2025 23:35:59 +0000 (0:00:02.075) 0:00:58.741 *********** 2025-05-13 23:41:37.275992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276022 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.276028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276045 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.276049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276074 | orchestrator | skipping: [testbed-node-2] => 
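[Editorial note: the long per-item "skipping:" runs here are the normal shape of a task that loops over the service map (dict2items output, hence the {'key': ..., 'value': ...} items) behind a guard that evaluates false in this run; backend TLS is disabled, so both "backend internal TLS" tasks skip every item on every node. A sketch of the pattern, with assumed variable names in the spirit of the role (kolla_enable_tls_backend is a real kolla-ansible switch; the source layout is illustrative):

    - name: mariadb | Copying over backend internal TLS certificate
      ansible.builtin.copy:
        src: "{{ kolla_certificates_dir }}/{{ inventory_hostname }}-cert.pem"  # assumed layout
        dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"
      loop: "{{ loadbalancer_services | dict2items }}"   # yields the {'key': 'haproxy', 'value': {...}} items as logged
      when:
        - item.value.enabled | bool
        - kolla_enable_tls_backend | bool                # false in this run, hence "skipping:" per item

(The __omit_place_holder__<hash> strings in the haproxy-ssh volume lists earlier are a related mechanism: Ansible's special omit value rendered inside a list entry that was left undefined.)]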
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276087 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.276091 | orchestrator | 2025-05-13 23:41:37.276095 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-05-13 23:41:37.276100 | orchestrator | Tuesday 13 May 2025 23:36:00 +0000 (0:00:00.855) 0:00:59.597 *********** 2025-05-13 23:41:37.276104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276117 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.276121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276138 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.276146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276207 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.276212 | orchestrator | 2025-05-13 23:41:37.276216 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-13 23:41:37.276220 | orchestrator | Tuesday 13 May 2025 23:36:01 +0000 (0:00:01.282) 0:01:00.880 *********** 2025-05-13 23:41:37.276225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276261 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.276268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276281 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.276329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276346 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.276350 | orchestrator | 2025-05-13 23:41:37.276357 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-05-13 23:41:37.276361 | orchestrator | Tuesday 13 May 2025 23:36:03 +0000 (0:00:01.985) 0:01:02.865 *********** 2025-05-13 23:41:37.276365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276380 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.276384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276399 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.276405 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276420 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.276424 | orchestrator | 2025-05-13 23:41:37.276427 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-05-13 23:41:37.276431 | orchestrator | Tuesday 13 May 2025 23:36:06 +0000 (0:00:02.328) 0:01:05.194 *********** 2025-05-13 23:41:37.276435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 
23:41:37.276446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276450 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.276454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276473 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.276477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276492 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.276496 | orchestrator | 2025-05-13 23:41:37.276500 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-05-13 23:41:37.276503 | orchestrator | Tuesday 13 May 2025 23:36:07 +0000 (0:00:00.970) 0:01:06.164 *********** 2025-05-13 23:41:37.276507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276523 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.276530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276544 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.276548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 23:41:37.276552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 23:41:37.276559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 23:41:37.276563 | orchestrator | skipping: [testbed-node-2] 2025-05-13 
23:41:37.276566 | orchestrator |
2025-05-13 23:41:37.276570 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-05-13 23:41:37.276574 | orchestrator | Tuesday 13 May 2025 23:36:08 +0000 (0:00:01.336) 0:01:07.500 ***********
2025-05-13 23:41:37.276578 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-13 23:41:37.276582 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-13 23:41:37.276588 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-13 23:41:37.276592 | orchestrator |
2025-05-13 23:41:37.276596 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-05-13 23:41:37.276599 | orchestrator | Tuesday 13 May 2025 23:36:10 +0000 (0:00:01.949) 0:01:09.450 ***********
2025-05-13 23:41:37.276603 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-13 23:41:37.276607 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-13 23:41:37.276614 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-13 23:41:37.276617 | orchestrator |
2025-05-13 23:41:37.276621 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-05-13 23:41:37.276625 | orchestrator | Tuesday 13 May 2025 23:36:12 +0000 (0:00:01.837) 0:01:11.287 ***********
2025-05-13 23:41:37.276628 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-13 23:41:37.276632 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-13 23:41:37.276636 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-13 23:41:37.276640 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-13 23:41:37.276644 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:41:37.276647 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-13 23:41:37.276651 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:41:37.276655 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-13 23:41:37.276659 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:41:37.276662 | orchestrator |
2025-05-13 23:41:37.276666 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-05-13 23:41:37.276670 | orchestrator | Tuesday 13 May 2025 23:36:14 +0000 (0:00:02.699) 0:01:13.987 ***********
2025-05-13 23:41:37.276674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.276678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.276684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-13 23:41:37.276691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.276757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.276764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 23:41:37.276768 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.276772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.276776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 23:41:37.276779 | orchestrator | 2025-05-13 23:41:37.276783 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-13 23:41:37.276787 | orchestrator | Tuesday 13 May 2025 23:36:18 +0000 (0:00:03.210) 0:01:17.198 *********** 2025-05-13 23:41:37.276791 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.276795 | orchestrator | 2025-05-13 23:41:37.276798 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-13 23:41:37.276828 | orchestrator | Tuesday 13 May 2025 23:36:19 +0000 (0:00:01.195) 0:01:18.393 *********** 2025-05-13 23:41:37.276834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-13 23:41:37.276845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.276849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.276853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.276857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-13 23:41:37.276861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.276868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-13 23:41:37.277183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.277210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277222 | orchestrator | 2025-05-13 23:41:37.277226 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-13 23:41:37.277230 | orchestrator | Tuesday 13 May 2025 23:36:24 +0000 
(0:00:05.247) 0:01:23.641 *********** 2025-05-13 23:41:37.277240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-13 23:41:37.277256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.277260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277268 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.277272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-13 23:41:37.277276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.277286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277294 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.277301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-13 23:41:37.277305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  
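
For orientation: the item dicts iterated in these loops are kolla-ansible service definitions. Each entry names the container, the inventory group that runs it, the image, the bind mounts, and optionally a healthcheck plus a haproxy section consumed by the haproxy-config role. Reassembled as YAML, the aodh-api definition logged above looks roughly as follows (reconstructed from the testbed-node-0 output; the two empty strings in the logged volume lists appear to be unused optional mounts and are omitted here):

    aodh-api:
      container_name: aodh_api
      group: aodh-api
      enabled: true
      image: registry.osism.tech/kolla/aodh-api:2024.2
      volumes:
        - /etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - aodh:/var/lib/aodh/
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:
        interval: '30'
        retries: '3'
        start_period: '5'
        test: ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042']
        timeout: '30'
      haproxy:
        aodh_api:
          enabled: 'yes'
          mode: http
          external: false
          port: '8042'
          listen_port: '8042'
        aodh_api_external:
          enabled: 'yes'
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: '8042'
          listen_port: '8042'

The haproxy sub-dict yields one frontend/backend pair per key; entries with external: true are bound to the external FQDN (api.testbed.osism.xyz) rather than the internal VIP. The "single external frontend" tasks skip on every node because that option is evidently not enabled in this deployment.
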
2025-05-13 23:41:37.277309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-13 23:41:37.277313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-13 23:41:37.277405 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:41:37.277419 | orchestrator |
2025-05-13 23:41:37.277426 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-05-13 23:41:37.277432 | orchestrator | Tuesday 13 May 2025 23:36:25 +0000 (0:00:01.422) 0:01:25.063 ***********
2025-05-13 23:41:37.277439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-13 23:41:37.277447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-13 23:41:37.277454 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:41:37.277467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-13 23:41:37.277474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-13 23:41:37.277480 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:41:37.277486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-13 23:41:37.277492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-13 23:41:37.277499 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:41:37.277503 | orchestrator |
2025-05-13 23:41:37.277511 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-05-13 23:41:37.277515 | orchestrator | Tuesday 13 May 2025 23:36:27 +0000 (0:00:01.521) 0:01:26.585 ***********
2025-05-13 23:41:37.277519 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:41:37.277523 |
orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.277526 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.277530 | orchestrator | 2025-05-13 23:41:37.277534 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-13 23:41:37.277538 | orchestrator | Tuesday 13 May 2025 23:36:29 +0000 (0:00:01.718) 0:01:28.303 *********** 2025-05-13 23:41:37.277541 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.277545 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.277549 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.277553 | orchestrator | 2025-05-13 23:41:37.277556 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-13 23:41:37.277560 | orchestrator | Tuesday 13 May 2025 23:36:31 +0000 (0:00:02.355) 0:01:30.659 *********** 2025-05-13 23:41:37.277564 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.277568 | orchestrator | 2025-05-13 23:41:37.277571 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-13 23:41:37.277575 | orchestrator | Tuesday 13 May 2025 23:36:32 +0000 (0:00:00.772) 0:01:31.431 *********** 2025-05-13 23:41:37.277580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.277589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2025-05-13 23:41:37.277601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.277607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.277622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277633 | orchestrator | 2025-05-13 23:41:37.277637 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-13 23:41:37.277641 | orchestrator | Tuesday 13 May 2025 23:36:36 +0000 (0:00:03.888) 0:01:35.320 *********** 2025-05-13 23:41:37.277647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.277651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  
2025-05-13 23:41:37.277663 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.277667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.277671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277682 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.277692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.277717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.277735 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.277739 | orchestrator | 2025-05-13 23:41:37.277743 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-13 23:41:37.277747 | orchestrator | Tuesday 13 May 2025 23:36:36 +0000 (0:00:00.646) 0:01:35.966 *********** 2025-05-13 23:41:37.277751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 23:41:37.277756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 23:41:37.277759 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.277763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 23:41:37.277767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 23:41:37.277771 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.277778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 23:41:37.277782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 23:41:37.277786 | orchestrator | skipping: [testbed-node-2]
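
The haproxy-config pattern above repeats for every enabled service in this run: the role loops over the service's container map, renders load-balancer configuration only for entries that carry a 'haproxy' sub-dict (here barbican-api, with an internal and an external frontend on port 9311), and skips pure backend workers such as barbican-keystone-listener and barbican-worker; the firewall items are skipped throughout, which matches a testbed where haproxy's firewall integration is left disabled. A minimal sketch of that filtering and rendering idea, in Python for illustration only (kolla-ansible's real Jinja2 template additionally handles TLS, ACLs and frontend/backend splitting; 'vip' and 'backends' are assumed inputs):

def render_listen_stanzas(project_services, vip, backends):
    # project_services is shaped like the loop items above; backends is an
    # assumed list of (hostname, ip) pairs for the service's inventory group.
    stanzas = []
    for service in project_services.values():
        # Entries without a 'haproxy' sub-dict (workers, listeners) yield nothing.
        for name, lb in service.get("haproxy", {}).items():
            if lb.get("enabled") not in ("yes", True):
                continue
            lines = [f"listen {name}",
                     f"    mode {lb['mode']}",
                     f"    bind {vip}:{lb.get('listen_port', lb['port'])}"]
            # ceph-rgw (below) ships ready-made server lines via
            # custom_member_list; everything else derives members from its group.
            members = lb.get("custom_member_list") or [
                f"server {host} {ip}:{lb['port']} check" for host, ip in backends]
            lines += [f"    {member}" for member in members]
            stanzas.append("\n".join(lines))
    return "\n\n".join(stanzas)
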
2025-05-13 23:41:37.277805 | orchestrator | 2025-05-13 23:41:37.277820 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-13 23:41:37.277825 | orchestrator | Tuesday 13 May 2025 23:36:37 +0000 (0:00:00.717) 0:01:36.683 *********** 2025-05-13 23:41:37.277829 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.277834 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.277838 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.277842 | orchestrator | 2025-05-13 23:41:37.277846 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-13 23:41:37.277850 | orchestrator | Tuesday 13 May 2025 23:36:39 +0000 (0:00:01.602) 0:01:38.286 *********** 2025-05-13 23:41:37.277868 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.277873 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.277877 | orchestrator | changed: [testbed-node-2]
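
The proxysql-config role only acts for services that own a MariaDB schema, which is why barbican produces changed users and rules files on all three controllers while ceph-rgw skips the same two tasks further down. Illustratively, such a snippet boils down to a service database account plus a query rule routing that account's traffic to the writer hostgroup; the values below are hypothetical placeholders, not the files rendered in this run:

# Hypothetical sketch of what a per-service ProxySQL users/rules snippet
# carries; account name, password source and hostgroup ids are assumptions.
BARBICAN_PROXYSQL = {
    "mysql_users": [{
        "username": "barbican",
        "password": "<from passwords.yml>",
        "default_hostgroup": 0,
    }],
    "mysql_query_rules": [{
        "rule_id": 1,
        "active": 1,
        "username": "barbican",
        "destination_hostgroup": 0,   # send all statements to the writer
        "apply": 1,
    }],
}
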
2025-05-13 23:41:37.277882 | orchestrator | 2025-05-13 23:41:37.277903 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-13 23:41:37.277908 | orchestrator | Tuesday 13 May 2025 23:36:41 +0000 (0:00:00.301) 0:01:40.408 *********** 2025-05-13 23:41:37.277912 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.277920 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.277925 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.277929 | orchestrator | 2025-05-13 23:41:37.277933 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-13 23:41:37.277937 | orchestrator | Tuesday 13 May 2025 23:36:41 +0000 (0:00:00.642) 0:01:40.709 *********** 2025-05-13 23:41:37.277942 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.277946 | orchestrator | 2025-05-13 23:41:37.277950 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-13 23:41:37.277955 | orchestrator | Tuesday 13 May 2025 23:36:42 +0000 (0:00:00.642) 0:01:41.352 *********** 2025-05-13 23:41:37.277960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-13 23:41:37.277965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-13 23:41:37.277970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
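
ceph-rgw is the one service in this run whose backends do not come from kolla's inventory groups: the Ceph nodes testbed-node-3 through testbed-node-5 host the radosgw instances on port 8081, so the item above passes finished server lines in via custom_member_list while the frontend itself listens on port 6780. Pasted together, the implied listen section looks approximately like the string below; the bind address is a placeholder, only the server lines are verbatim from the log:

# Approximate haproxy stanza implied by the ceph-rgw item above. The VIP
# placeholder is an assumption; the member lines are copied verbatim from
# custom_member_list.
RADOSGW_LISTEN = """\
listen radosgw
    mode http
    bind <internal-vip>:6780
    server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5
    server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5
    server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5
"""
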
2025-05-13 23:41:37.277974 | orchestrator | 2025-05-13 23:41:37.277981 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-13 23:41:37.277986 | orchestrator | Tuesday 13 May 2025 23:36:45 +0000 (0:00:03.115) 0:01:44.467 *********** 2025-05-13 23:41:37.277993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-13 23:41:37.278000 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.278005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-13 23:41:37.278009 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.278073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-13 23:41:37.278078 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.278082 | orchestrator | 2025-05-13 23:41:37.278086 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-13 23:41:37.278090 | orchestrator | Tuesday 13 May 2025 23:36:46 +0000 (0:00:01.599) 0:01:46.066 *********** 2025-05-13 23:41:37.278095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-13 23:41:37.278100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-13 23:41:37.278105 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.278111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-13 23:41:37.278115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-13 23:41:37.278138 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.278141 | orchestrator | 2025-05-13 23:41:37.278145 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-13 23:41:37.278149 | orchestrator | Tuesday 13 May 2025 23:36:48 +0000 (0:00:01.760) 0:01:47.827 *********** 2025-05-13 23:41:37.278152 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.278156 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.278160 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.278164 | orchestrator | 2025-05-13 23:41:37.278167 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-13 23:41:37.278171 | orchestrator | Tuesday 13 May 2025 23:36:49 +0000 (0:00:00.938) 0:01:48.765 *********** 2025-05-13 23:41:37.278175 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.278179 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.278182 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.278186 | orchestrator | 2025-05-13 23:41:37.278190 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-13 23:41:37.278193 | orchestrator | Tuesday 13 May 2025 23:36:50 +0000 (0:00:01.256) 0:01:50.021 *********** 2025-05-13 23:41:37.278197 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.278201 | orchestrator | 2025-05-13 23:41:37.278204 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-13 23:41:37.278208 | orchestrator | Tuesday 13 May 2025 23:36:51 +0000 (0:00:00.714) 0:01:50.736 *********** 2025-05-13 23:41:37.278212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.278216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.278241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.278268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278302 | orchestrator | 2025-05-13 23:41:37.278307 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-13 23:41:37.278313 | orchestrator | Tuesday 13 May 2025 23:36:55 +0000 (0:00:03.752) 0:01:54.488 *********** 2025-05-13 23:41:37.278322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.278328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278350 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.278356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.278368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.278434 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.278438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278457 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 23:41:37.278461 | orchestrator | 2025-05-13 23:41:37.278464 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-13 23:41:37.278468 | orchestrator | Tuesday 13 May 2025 23:36:56 +0000 (0:00:01.287) 0:01:55.775 *********** 2025-05-13 23:41:37.278472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 23:41:37.278479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 23:41:37.278483 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.278487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 23:41:37.278491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 23:41:37.278494 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.278498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 23:41:37.278502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 23:41:37.278506 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.278509 | orchestrator | 2025-05-13 23:41:37.278513 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-13 23:41:37.278517 | orchestrator | Tuesday 13 May 2025 23:36:57 +0000 (0:00:01.202) 0:01:56.978 *********** 2025-05-13 23:41:37.278521 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.278525 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.278528 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.278536 | orchestrator | 2025-05-13 23:41:37.278540 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-13 23:41:37.278559 | orchestrator | Tuesday 13 May 2025 23:36:59 +0000 (0:00:01.440) 0:01:58.419 *********** 2025-05-13 23:41:37.278566 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.278600 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.278608 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.278614 | orchestrator | 2025-05-13 23:41:37.278621 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-13 23:41:37.278627 | orchestrator | Tuesday 13 May 2025 23:37:02 +0000 (0:00:02.825) 0:02:01.244 *********** 2025-05-13 23:41:37.278634 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.278641 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.278648 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.278680 | 
orchestrator | 2025-05-13 23:41:37.278684 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-13 23:41:37.278688 | orchestrator | Tuesday 13 May 2025 23:37:02 +0000 (0:00:00.803) 0:02:02.048 *********** 2025-05-13 23:41:37.278691 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.278710 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.278714 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.278718 | orchestrator | 2025-05-13 23:41:37.278722 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-13 23:41:37.278725 | orchestrator | Tuesday 13 May 2025 23:37:03 +0000 (0:00:00.393) 0:02:02.441 *********** 2025-05-13 23:41:37.278729 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.278733 | orchestrator | 2025-05-13 23:41:37.278737 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-13 23:41:37.278740 | orchestrator | Tuesday 13 May 2025 23:37:04 +0000 (0:00:00.804) 0:02:03.245 *********** 2025-05-13 23:41:37.278749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:41:37.278758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:41:37.278762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.278791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:41:37.279113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:41:37.279133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279191 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:41:37.279196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:41:37.279207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279229 | orchestrator | 2025-05-13 23:41:37.279233 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-13 23:41:37.279237 | orchestrator | Tuesday 13 May 2025 23:37:08 +0000 (0:00:04.321) 0:02:07.566 *********** 2025-05-13 23:41:37.279244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 23:41:37.279252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:41:37.279256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 23:41:37.279281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:41:37.279289 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279297 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.279302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 23:41:37.279322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:41:37.279330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279338 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.279344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.279393 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.279397 | orchestrator | 2025-05-13 23:41:37.279414 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-13 23:41:37.279418 | orchestrator | Tuesday 13 May 2025 23:37:09 +0000 (0:00:00.819) 0:02:08.386 *********** 2025-05-13 23:41:37.279423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-13 23:41:37.279427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-13 23:41:37.279432 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.279435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-13 23:41:37.279439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-13 23:41:37.279443 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.279459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-13 23:41:37.279463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-13 23:41:37.279466 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.279470 | orchestrator | 2025-05-13 23:41:37.279474 | orchestrator | TASK [proxysql-config : Copying 
over designate ProxySQL users config] ********** 2025-05-13 23:41:37.279477 | orchestrator | Tuesday 13 May 2025 23:37:10 +0000 (0:00:00.965) 0:02:09.352 *********** 2025-05-13 23:41:37.279496 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.279510 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.279526 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.279535 | orchestrator | 2025-05-13 23:41:37.279542 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-13 23:41:37.279548 | orchestrator | Tuesday 13 May 2025 23:37:11 +0000 (0:00:01.534) 0:02:10.886 *********** 2025-05-13 23:41:37.279555 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.279561 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.279568 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.279574 | orchestrator | 2025-05-13 23:41:37.279580 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-13 23:41:37.279589 | orchestrator | Tuesday 13 May 2025 23:37:13 +0000 (0:00:02.014) 0:02:12.901 *********** 2025-05-13 23:41:37.279596 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.279602 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.279608 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.279614 | orchestrator | 2025-05-13 23:41:37.279621 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-13 23:41:37.279628 | orchestrator | Tuesday 13 May 2025 23:37:14 +0000 (0:00:00.332) 0:02:13.233 *********** 2025-05-13 23:41:37.279634 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.279640 | orchestrator | 2025-05-13 23:41:37.279646 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-13 23:41:37.279653 | orchestrator | Tuesday 13 May 2025 23:37:14 +0000 (0:00:00.780) 0:02:14.014 *********** 2025-05-13 23:41:37.279666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:41:37.279675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.279688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:41:37.279693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.279741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:41:37.279747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.279758 | orchestrator | 2025-05-13 23:41:37.279763 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-13 23:41:37.279767 | orchestrator | Tuesday 13 May 2025 23:37:19 +0000 (0:00:04.088) 0:02:18.102 *********** 2025-05-13 23:41:37.279776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 23:41:37.279782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.279791 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.279854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 23:41:37.279869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.279876 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.279886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 23:41:37.279904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.279911 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.279917 | orchestrator | 2025-05-13 23:41:37.279925 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-13 23:41:37.279929 | orchestrator | Tuesday 13 May 2025 23:37:21 +0000 (0:00:02.802) 0:02:20.905 *********** 
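Annotation: every glance entry dumped above carries the same custom_member_list, so HAProxy balances across all three controllers no matter which node rendered the config. A minimal Python sketch of how such member lines can be derived from an address map follows; the hostnames and addresses are taken from the dumps, but the derivation itself is illustrative and not kolla-ansible's actual template logic:

    # Rebuild the 'server ...' member lines seen in custom_member_list above.
    # The TLS suffix variant matches the glance-tls-proxy entries.
    members = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }

    def member_lines(port: int, tls: bool = False) -> list[str]:
        suffix = " ssl verify required ca-file ca-certificates.crt" if tls else ""
        return [
            f"server {host} {addr}:{port} check inter 2000 rise 2 fall 5{suffix}"
            for host, addr in members.items()
        ]

    print("\n".join(member_lines(9292)))             # glance_api members
    print("\n".join(member_lines(9292, tls=True)))   # glance_tls_proxy members

The 6h frontend/backend timeouts in the same entries are presumably there so long-running image uploads through the VIP are not cut off by HAProxy's default client/server timeouts.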
2025-05-13 23:41:37.279934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 23:41:37.279967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 23:41:37.279972 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.279976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 23:41:37.279984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 23:41:37.279988 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.279992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 23:41:37.280002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 23:41:37.280006 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.280010 | 
orchestrator | 2025-05-13 23:41:37.280015 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-13 23:41:37.280019 | orchestrator | Tuesday 13 May 2025 23:37:25 +0000 (0:00:03.287) 0:02:24.193 *********** 2025-05-13 23:41:37.280023 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.280027 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.280032 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.280036 | orchestrator | 2025-05-13 23:41:37.280040 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-13 23:41:37.280061 | orchestrator | Tuesday 13 May 2025 23:37:26 +0000 (0:00:01.578) 0:02:25.772 *********** 2025-05-13 23:41:37.280066 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.280070 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.280075 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.280079 | orchestrator | 2025-05-13 23:41:37.280083 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-13 23:41:37.280096 | orchestrator | Tuesday 13 May 2025 23:37:28 +0000 (0:00:02.017) 0:02:27.789 *********** 2025-05-13 23:41:37.280100 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.280104 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.280108 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.280111 | orchestrator | 2025-05-13 23:41:37.280115 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-13 23:41:37.280119 | orchestrator | Tuesday 13 May 2025 23:37:29 +0000 (0:00:00.313) 0:02:28.103 *********** 2025-05-13 23:41:37.280163 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.280167 | orchestrator | 2025-05-13 23:41:37.280171 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-13 23:41:37.280175 | orchestrator | Tuesday 13 May 2025 23:37:29 +0000 (0:00:00.913) 0:02:29.016 *********** 2025-05-13 23:41:37.280179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:41:37.280184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
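Annotation: each grafana item in this loop defines the same haproxy pair, grafana_server on the internal VIP and grafana_server_external under api.testbed.osism.xyz, both on port 3000. Note that 'enabled' appears both as the string 'yes' and as the boolean True across these dumps; the role's templates accept either as truthy. A rough sketch of the frontend pair this implies is below; the internal VIP 192.168.16.9 is inferred from the no_proxy lists earlier in the log, the external VIP is not shown in this excerpt, and the stanza layout is an assumption rather than the rendered haproxy.cfg:

    # Hypothetical rendering of the internal/external frontend pair.
    SERVICES = {
        "grafana_server": {"external": False, "port": "3000"},
        "grafana_server_external": {
            "external": True,
            "port": "3000",
            "external_fqdn": "api.testbed.osism.xyz",
        },
    }
    INTERNAL_VIP = "192.168.16.9"    # inferred from no_proxy above
    EXTERNAL_VIP = "<external VIP>"  # not shown in this log excerpt

    for name, svc in SERVICES.items():
        bind = EXTERNAL_VIP if svc["external"] else INTERNAL_VIP
        print(f"frontend {name}_front")
        print("    mode http")
        print(f"    bind {bind}:{svc['port']}")
        print()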
2025-05-13 23:41:37.280191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:41:37.280195 | orchestrator | 2025-05-13 23:41:37.280198 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-13 23:41:37.280202 | orchestrator | Tuesday 13 May 2025 23:37:33 +0000 (0:00:03.747) 0:02:32.764 *********** 2025-05-13 23:41:37.280210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 23:41:37.280214 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.280218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 23:41:37.280226 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.280230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 23:41:37.280234 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.280237 | orchestrator | 2025-05-13 23:41:37.280241 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-13 23:41:37.280245 | orchestrator | Tuesday 13 May 2025 
23:37:34 +0000 (0:00:00.400) 0:02:33.164 *********** 2025-05-13 23:41:37.280248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-13 23:41:37.280253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-13 23:41:37.280257 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.280261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-13 23:41:37.280265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-13 23:41:37.280268 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.280272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-13 23:41:37.280278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-13 23:41:37.280282 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.280286 | orchestrator | 2025-05-13 23:41:37.280290 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-13 23:41:37.280293 | orchestrator | Tuesday 13 May 2025 23:37:34 +0000 (0:00:00.641) 0:02:33.806 *********** 2025-05-13 23:41:37.280297 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.280301 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.280304 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.280308 | orchestrator | 2025-05-13 23:41:37.280312 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-13 23:41:37.280315 | orchestrator | Tuesday 13 May 2025 23:37:36 +0000 (0:00:01.789) 0:02:35.595 *********** 2025-05-13 23:41:37.280319 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.280323 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.280331 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.280335 | orchestrator | 2025-05-13 23:41:37.280339 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-13 23:41:37.280342 | orchestrator | Tuesday 13 May 2025 23:37:38 +0000 (0:00:02.141) 0:02:37.737 *********** 2025-05-13 23:41:37.280346 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.280350 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.280356 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.280359 | orchestrator | 2025-05-13 23:41:37.280363 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-13 23:41:37.280367 | orchestrator | Tuesday 13 May 2025 23:37:38 +0000 (0:00:00.288) 0:02:38.025 *********** 2025-05-13 23:41:37.280370 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-13 23:41:37.280374 | orchestrator | 2025-05-13 23:41:37.280378 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-13 23:41:37.280381 | orchestrator | Tuesday 13 May 2025 23:37:40 +0000 (0:00:01.230) 0:02:39.256 *********** 2025-05-13 23:41:37.280386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 23:41:37.280395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 23:41:37.280421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 23:41:37.280426 | orchestrator | 2025-05-13 23:41:37.280430 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-13 23:41:37.280433 | orchestrator | Tuesday 
13 May 2025 23:37:43 +0000 (0:00:03.795) 0:02:43.051 *********** 2025-05-13 23:41:37.281160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 23:41:37.281198 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.281226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 23:41:37.281237 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.281252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 23:41:37.281260 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.281266 | orchestrator | 2025-05-13 23:41:37.281273 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-13 23:41:37.281280 | orchestrator | Tuesday 13 May 2025 23:37:44 +0000 
(0:00:00.845) 0:02:43.897 *********** 2025-05-13 23:41:37.281285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 23:41:37.281290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 23:41:37.281294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 23:41:37.281299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 23:41:37.281303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 23:41:37.281314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 23:41:37.281319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 23:41:37.281323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-13 23:41:37.281327 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.281334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 23:41:37.281338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-13 23:41:37.281342 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.281345 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 23:41:37.281349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 23:41:37.281353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 23:41:37.281358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 23:41:37.281361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-13 23:41:37.281365 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.281369 | orchestrator | 2025-05-13 23:41:37.281373 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-13 23:41:37.281376 | orchestrator | Tuesday 13 May 2025 23:37:46 +0000 (0:00:01.506) 0:02:45.403 *********** 2025-05-13 23:41:37.281380 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.281384 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.281391 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.281394 | orchestrator | 2025-05-13 23:41:37.281398 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-13 23:41:37.281402 | orchestrator | Tuesday 13 May 2025 23:37:47 +0000 (0:00:01.406) 0:02:46.809 *********** 2025-05-13 23:41:37.281405 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.281409 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.281451 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.281456 | orchestrator | 2025-05-13 23:41:37.281460 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-13 23:41:37.281463 | orchestrator | Tuesday 13 May 2025 23:37:50 +0000 (0:00:02.357) 0:02:49.167 *********** 2025-05-13 23:41:37.281467 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.281471 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.281474 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.281478 | orchestrator | 2025-05-13 23:41:37.281497 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-13 23:41:37.281501 | orchestrator | Tuesday 13 May 2025 23:37:50 +0000 (0:00:00.542) 0:02:49.710 *********** 2025-05-13 23:41:37.281505 | orchestrator | skipping: 
[testbed-node-0] 2025-05-13 23:41:37.281509 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.281512 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.281516 | orchestrator | 2025-05-13 23:41:37.281522 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-13 23:41:37.281526 | orchestrator | Tuesday 13 May 2025 23:37:50 +0000 (0:00:00.332) 0:02:50.042 *********** 2025-05-13 23:41:37.281530 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.281533 | orchestrator | 2025-05-13 23:41:37.281537 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-13 23:41:37.281541 | orchestrator | Tuesday 13 May 2025 23:37:52 +0000 (0:00:01.248) 0:02:51.291 *********** 2025-05-13 23:41:37.281548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:41:37.281553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:41:37.281558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:41:37.281565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:41:37.281572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:41:37.281576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:41:37.281583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:41:37.281587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:41:37.281615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:41:37.281620 | orchestrator | 2025-05-13 23:41:37.281636 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-13 23:41:37.281640 | orchestrator | Tuesday 13 May 2025 23:37:56 +0000 (0:00:03.910) 0:02:55.201 *********** 2025-05-13 23:41:37.281646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 23:41:37.281650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:41:37.281658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:41:37.281662 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.281666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 23:41:37.281673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:41:37.281678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:41:37.281682 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.281687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 23:41:37.281694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:41:37.281716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:41:37.281726 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.281730 | orchestrator | 2025-05-13 23:41:37.281734 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-13 23:41:37.281738 | orchestrator | Tuesday 13 May 2025 23:37:56 +0000 (0:00:00.824) 0:02:56.026 *********** 2025-05-13 23:41:37.281741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 23:41:37.281746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 23:41:37.281750 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.281754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 23:41:37.281758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 23:41:37.281762 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.281766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 23:41:37.281771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 23:41:37.281775 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.281779 | orchestrator |
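
The keystone_internal and keystone_external entries skipped in the firewall task above are the same per-listener dicts the haproxy-config role renders into the load-balancer configuration: both listen on port 5000, both speak plain HTTP toward the backend (tls_backend 'no'), and both push 'balance roundrobin' into the backend section. A rough sketch of how one such dict could map onto an HAProxy frontend/backend pair follows; the output format and the member list are invented here for illustration (kolla-ansible's own Jinja template and inventory-derived members differ), with node IPs taken from the healthcheck URLs in the log:

    # Sketch: render an HAProxy-style stanza from a kolla listener dict.
    # Format and members are hypothetical, for illustration only.
    def render_listener(name, svc, members):
        lines = [f"frontend {name}_front",
                 f"    mode {svc['mode']}",
                 f"    bind <vip>:{svc['listen_port']}",
                 f"    default_backend {name}_back",
                 f"backend {name}_back",
                 f"    mode {svc['mode']}"]
        lines += [f"    {opt}" for opt in svc.get("backend_http_extra", [])]
        lines += [f"    server {host} {ip}:{svc['port']} check"
                  for host, ip in members]
        return "\n".join(lines)

    keystone_internal = {"enabled": True, "mode": "http", "external": False,
                         "tls_backend": "no", "port": "5000",
                         "listen_port": "5000",
                         "backend_http_extra": ["balance roundrobin"]}

    print(render_listener("keystone_internal", keystone_internal,
                          [("testbed-node-0", "192.168.16.10"),
                           ("testbed-node-1", "192.168.16.11"),
                           ("testbed-node-2", "192.168.16.12")]))

2025-05-13 23:41:37.281783 | orchestrator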
| TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-13 23:41:37.281787 | orchestrator | Tuesday 13 May 2025 23:37:58 +0000 (0:00:01.440) 0:02:57.466 *********** 2025-05-13 23:41:37.281792 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.281796 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.281800 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.281804 | orchestrator | 2025-05-13 23:41:37.281809 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-13 23:41:37.281813 | orchestrator | Tuesday 13 May 2025 23:37:59 +0000 (0:00:01.284) 0:02:58.751 *********** 2025-05-13 23:41:37.281819 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.281837 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.281842 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.281846 | orchestrator | 2025-05-13 23:41:37.281850 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-13 23:41:37.281854 | orchestrator | Tuesday 13 May 2025 23:38:01 +0000 (0:00:01.958) 0:03:00.709 *********** 2025-05-13 23:41:37.281859 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.281863 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.281867 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.281871 | orchestrator | 2025-05-13 23:41:37.281876 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-13 23:41:37.281880 | orchestrator | Tuesday 13 May 2025 23:38:01 +0000 (0:00:00.295) 0:03:01.005 *********** 2025-05-13 23:41:37.281884 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.281888 | orchestrator | 2025-05-13 23:41:37.281892 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-13 23:41:37.281900 | orchestrator | Tuesday 13 May 2025 23:38:03 +0000 (0:00:01.190) 0:03:02.196 *********** 2025-05-13 23:41:37.281907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:41:37.281913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.281918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:41:37.281923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.281930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:41:37.281940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.281944 | orchestrator | 2025-05-13 23:41:37.281949 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-13 23:41:37.281953 | orchestrator | Tuesday 13 May 2025 23:38:06 +0000 (0:00:03.699) 0:03:05.895 *********** 2025-05-13 23:41:37.281958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:41:37.281962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.281969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:41:37.281973 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.281986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.281990 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.281995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:41:37.282004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282009 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.282036 | orchestrator | 2025-05-13 23:41:37.282041 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-13 23:41:37.282045 | orchestrator | Tuesday 13 May 2025 23:38:07 +0000 (0:00:00.639) 0:03:06.535 *********** 2025-05-13 23:41:37.282049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-13 23:41:37.282054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-13 23:41:37.282059 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.282063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-13 23:41:37.282081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-13 23:41:37.282085 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.282090 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-13 23:41:37.282100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-13 23:41:37.282104 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.282108 | orchestrator | 2025-05-13 23:41:37.282113 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-13 23:41:37.282117 | orchestrator | Tuesday 13 May 2025 23:38:08 +0000 (0:00:01.260) 0:03:07.795 *********** 2025-05-13 23:41:37.282121 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.282126 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.282130 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.282134 | orchestrator | 2025-05-13 23:41:37.282138 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-13 23:41:37.282143 | orchestrator | Tuesday 13 May 2025 23:38:09 +0000 (0:00:01.260) 0:03:09.056 *********** 2025-05-13 23:41:37.282147 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.282150 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.282154 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.282158 | orchestrator | 2025-05-13 23:41:37.282162 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-13 23:41:37.282166 | orchestrator | Tuesday 13 May 2025 23:38:12 +0000 (0:00:02.067) 0:03:11.124 *********** 2025-05-13 23:41:37.282172 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.282190 | orchestrator | 2025-05-13 23:41:37.282194 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-13 23:41:37.282198 | orchestrator | Tuesday 13 May 2025 23:38:13 +0000 (0:00:01.253) 0:03:12.377 *********** 2025-05-13 23:41:37.282202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-13 23:41:37.282206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-13 23:41:37.282251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-13 23:41:37.282270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
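
Besides the haproxy map, each changed manila-api item above defines the container healthcheck: 'healthcheck_curl http://<node-ip>:8786', run every 30 seconds with 3 retries and a 30-second timeout. healthcheck_curl is a helper shipped inside the kolla images; the following is a rough Python approximation of what such a probe amounts to, under the assumption that success simply means the endpoint answers HTTP without a server-side error (the real script's exact semantics may differ):

    # Rough approximation of a healthcheck_curl-style probe; the success
    # criterion (any response below 500) is an assumption, not kolla's
    # documented behavior.
    import sys
    import urllib.error
    import urllib.request

    def probe(url, timeout=30):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as exc:
            return exc.code < 500
        except OSError:
            return False

    if __name__ == "__main__":
        sys.exit(0 if probe("http://192.168.16.12:8786") else 1)

2025-05-13 23:41:37.282322 |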
orchestrator | 2025-05-13 23:41:37.282328 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-13 23:41:37.282334 | orchestrator | Tuesday 13 May 2025 23:38:17 +0000 (0:00:03.706) 0:03:16.083 *********** 2025-05-13 23:41:37.282340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-13 23:41:37.282346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282368 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.282377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-13 23:41:37.282387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-13 23:41:37.282400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282425 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.282435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.282446 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.282450 | orchestrator | 2025-05-13 23:41:37.282454 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-13 23:41:37.282457 | orchestrator | Tuesday 13 May 2025 23:38:17 +0000 (0:00:00.644) 0:03:16.728 *********** 2025-05-13 23:41:37.282461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-13 23:41:37.282465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-13 23:41:37.282469 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.282473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-13 23:41:37.282476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-13 23:41:37.282480 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.282484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-13 23:41:37.282490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-13 23:41:37.282494 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.282498 | orchestrator | 2025-05-13 23:41:37.282502 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-13 23:41:37.282505 | orchestrator | Tuesday 13 May 2025 23:38:19 +0000 (0:00:01.586) 0:03:18.315 *********** 2025-05-13 23:41:37.282509 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.282513 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.282516 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.282520 | orchestrator | 2025-05-13 23:41:37.282524 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-13 23:41:37.282527 | orchestrator | Tuesday 13 May 2025 23:38:20 +0000 (0:00:01.495) 0:03:19.811 *********** 2025-05-13 23:41:37.282531 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.282534 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.282538 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.282542 | orchestrator | 2025-05-13 23:41:37.282546 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-13 23:41:37.282549 | orchestrator | Tuesday 13 May 2025 23:38:22 +0000 (0:00:02.141) 0:03:21.953 *********** 2025-05-13 23:41:37.282553 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.282556 | orchestrator | 2025-05-13 23:41:37.282560 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-13 23:41:37.282564 | orchestrator | Tuesday 13 May 2025 23:38:24 +0000 (0:00:01.374) 0:03:23.327 *********** 2025-05-13 23:41:37.282568 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 23:41:37.282571 | orchestrator | 2025-05-13 23:41:37.282575 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-13 23:41:37.282579 | orchestrator | Tuesday 13 May 2025 23:38:26 +0000 (0:00:02.711) 0:03:26.038 *********** 2025-05-13 23:41:37.282590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:41:37.282601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-13 23:41:37.282608 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.282651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:41:37.282662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-13 23:41:37.282668 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.283478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:41:37.283533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-13 23:41:37.283542 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.283549 | orchestrator | 2025-05-13 23:41:37.283556 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-13 23:41:37.283561 | orchestrator | Tuesday 13 May 2025 23:38:29 +0000 (0:00:02.073) 0:03:28.112 *********** 2025-05-13 23:41:37.283569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:41:37.283579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-13 23:41:37.283587 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.283592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:41:37.283596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-13 23:41:37.283600 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.283609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:41:37.283626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-13 23:41:37.283630 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.283634 | orchestrator | 2025-05-13 23:41:37.283637 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-13 23:41:37.283641 | orchestrator | Tuesday 13 May 2025 23:38:31 +0000 (0:00:02.592) 0:03:30.705 *********** 2025-05-13 23:41:37.283645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-13 23:41:37.283650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-13 23:41:37.283654 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.283749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-13 23:41:37.283761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-13 23:41:37.283772 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.283784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-13 23:41:37.283791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-13 23:41:37.283798 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.283804 | orchestrator | 2025-05-13 23:41:37.283811 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-13 23:41:37.283817 | orchestrator | Tuesday 13 May 2025 23:38:34 +0000 (0:00:02.930) 0:03:33.635 *********** 2025-05-13 23:41:37.283824 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.283829 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.283832 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.283836 | orchestrator | 2025-05-13 23:41:37.283840 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-13 23:41:37.283843 | orchestrator | Tuesday 13 May 2025 23:38:36 +0000 (0:00:01.758) 0:03:35.393 *********** 2025-05-13 23:41:37.283847 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.283851 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.283855 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.283858 | orchestrator | 2025-05-13 23:41:37.283862 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-13 23:41:37.283866 | orchestrator | Tuesday 13 May 2025 23:38:37 +0000 (0:00:01.308) 0:03:36.702 *********** 2025-05-13 23:41:37.283869 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.283873 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.283877 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.283880 | orchestrator | 2025-05-13 23:41:37.283884 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-13 23:41:37.283887 | orchestrator | Tuesday 13 May 2025 23:38:37 +0000 (0:00:00.266) 0:03:36.969 *********** 2025-05-13 23:41:37.283891 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.283895 | orchestrator | 2025-05-13 23:41:37.283898 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-13 23:41:37.283902 | orchestrator | Tuesday 13 May 2025 23:38:39 +0000 (0:00:01.163) 0:03:38.132 *********** 2025-05-13 23:41:37.283906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-13 23:41:37.283917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-13 23:41:37.283924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-13 23:41:37.283928 | orchestrator | 2025-05-13 23:41:37.283932 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-13 23:41:37.283936 | orchestrator | Tuesday 13 May 2025 23:38:40 +0000 (0:00:01.343) 0:03:39.476 *********** 2025-05-13 23:41:37.283940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-13 23:41:37.283944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': 
'11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-13 23:41:37.283947 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.283951 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.283955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-13 23:41:37.283962 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.283966 | orchestrator | 2025-05-13 23:41:37.283969 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-13 23:41:37.283973 | orchestrator | Tuesday 13 May 2025 23:38:40 +0000 (0:00:00.399) 0:03:39.875 *********** 2025-05-13 23:41:37.283981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-13 23:41:37.283986 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.283990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-13 23:41:37.283993 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.283999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-13 23:41:37.284003 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.284007 | orchestrator | 2025-05-13 23:41:37.284011 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-13 23:41:37.284014 | orchestrator | Tuesday 13 May 2025 23:38:41 +0000 (0:00:00.893) 0:03:40.769 *********** 2025-05-13 23:41:37.284018 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.284022 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.284025 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.284029 | orchestrator | 2025-05-13 23:41:37.284033 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-13 23:41:37.284036 | orchestrator | Tuesday 13 May 2025 23:38:42 +0000 (0:00:00.437) 0:03:41.207 *********** 2025-05-13 23:41:37.284040 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.284044 | orchestrator | skipping: [testbed-node-1] 
2025-05-13 23:41:37.284047 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.284051 | orchestrator | 2025-05-13 23:41:37.284055 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-13 23:41:37.284058 | orchestrator | Tuesday 13 May 2025 23:38:43 +0000 (0:00:01.285) 0:03:42.492 *********** 2025-05-13 23:41:37.284062 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.284066 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.284069 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.284073 | orchestrator | 2025-05-13 23:41:37.284077 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-13 23:41:37.284081 | orchestrator | Tuesday 13 May 2025 23:38:43 +0000 (0:00:00.372) 0:03:42.865 *********** 2025-05-13 23:41:37.284085 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.284089 | orchestrator | 2025-05-13 23:41:37.284094 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-13 23:41:37.284098 | orchestrator | Tuesday 13 May 2025 23:38:45 +0000 (0:00:01.475) 0:03:44.340 *********** 2025-05-13 23:41:37.284103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:41:37.284111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:41:37.284118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284149 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 23:41:37.284156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 23:41:37.284173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 
'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-05-13 23:41:37.284212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.284268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:41:37.284275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.284328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 23:41:37.284332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 
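
The healthcheck dicts repeated in every item follow one pattern: interval, retries, start_period and timeout in seconds, plus a CMD-SHELL test calling one of the helper scripts shipped in the kolla images (healthcheck_port checks that the named process has a connection to the given port, e.g. 5672 for RabbitMQ; healthcheck_curl probes an HTTP endpoint). A plausible translation of such a dict into Docker's native healthcheck flags, purely for illustration (the real kolla_docker module sets these through the Docker API, and the exact mapping here is an assumption):

    # Sketch: turn a kolla-style healthcheck dict into `docker run` flags.
    def render_healthcheck_flags(hc):
        # With CMD-SHELL, the remainder of the list is one shell command.
        cmd = ' '.join(hc['test'][1:])
        return [
            f"--health-cmd={cmd}",
            f"--health-interval={hc['interval']}s",
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
        ]

    hc = {'interval': '30', 'retries': '3', 'start_period': '5',
          'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'],
          'timeout': '30'}
    print('\n'.join(render_healthcheck_flags(hc)))
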
'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-13 23:41:37.284407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-13 23:41:37.284414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-13 23:41:37.284418 | orchestrator |
2025-05-13 23:41:37.284422 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-05-13 23:41:37.284427 | orchestrator | Tuesday 13 May 2025 23:38:49 +0000 (0:00:04.220) 0:03:48.560 ***********
2025-05-13 23:41:37.284434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-13 23:41:37.284442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent',
'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 23:41:37.284461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:41:37.284723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.284742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 23:41:37.284775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
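
The haproxy sub-dict seen on neutron-server and neutron-tls-proxy is what the "Copying over ... haproxy config" tasks consume: each entry becomes a frontend/backend pair, with external entries served under external_fqdn (here api.testbed.osism.xyz) and internal ones on the internal VIP; port is the backend port, listen_port the frontend bind port, and tls_backend toggles TLS towards the backend. A much simplified sketch of the stanza such an entry could render to (the real template supports far more options; the VIP below is a placeholder, not taken from this log):

    def render_listen(name, svc, vip, backends):
        # Build a minimal haproxy 'listen' stanza from one haproxy entry.
        lines = [f"listen {name}",
                 f"    mode {svc['mode']}",
                 f"    bind {vip}:{svc['listen_port']}"]
        lines += [f"    server {host} {addr}:{svc['port']} check"
                  for host, addr in backends]
        return '\n'.join(lines)

    neutron_server = {'enabled': True, 'mode': 'http', 'external': False,
                      'port': '9696', 'listen_port': '9696'}
    backends = [('testbed-node-0', '192.168.16.10'),
                ('testbed-node-1', '192.168.16.11'),
                ('testbed-node-2', '192.168.16.12')]
    internal_vip = '192.168.16.254'  # placeholder VIP for illustration only
    print(render_listen('neutron_server', neutron_server, internal_vip, backends))
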
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284797 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.284804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284833 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.284880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284888 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.284892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:41:37.284900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
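
One detail worth noting in the neutron-ovn-vpn-agent items: the test list carries '&&' as an element of its own. Because the first element is CMD-SHELL, everything after it is joined into a single shell command line, so this healthcheck only passes when both probes succeed, the OVN southbound database connection on 6642 and the message bus on 5672. Assuming plain whitespace joining:

    test = ['CMD-SHELL', 'healthcheck_port python 6642', '&&',
            'healthcheck_port neutron-ovn-vpn-agent 5672']
    shell_command = ' '.join(test[1:])
    assert shell_command == ('healthcheck_port python 6642 && '
                             'healthcheck_port neutron-ovn-vpn-agent 5672')
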
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 23:41:37.284924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 
'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 23:41:37.284965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.284976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 23:41:37.284980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:41:37.284984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
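
Every item in this "single external frontend" task is skipped. kolla-ansible can optionally serve all external APIs from one consolidated frontend instead of one frontend per service; since every service definition above carries its own external_fqdn (api.testbed.osism.xyz), this deployment evidently keeps per-service frontends and the task has nothing to do. A sketch of the gating, with the flag name an assumption based on the task name rather than a variable confirmed by this log:

    # Assumed flag; the actual kolla-ansible variable is not visible here.
    haproxy_single_external_frontend = False

    services = ['neutron-server', 'neutron-tls-proxy', 'neutron-dhcp-agent']
    for name in services:
        if not haproxy_single_external_frontend:
            print(f"skipping: (item={{'key': '{name}', ...}})")
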
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-13 23:41:37.285003 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:41:37.285007 | orchestrator |
2025-05-13 23:41:37.285011 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-05-13 23:41:37.285015 | orchestrator | Tuesday 13 May 2025 23:38:51 +0000 (0:00:02.138) 0:03:50.699 ***********
2025-05-13 23:41:37.285019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-13 23:41:37.285023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-13 23:41:37.285027 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:41:37.285031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-13 23:41:37.285035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-13 23:41:37.285042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-13 23:41:37.285046 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:41:37.285050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-13 23:41:37.285053 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:41:37.285057 | orchestrator |
2025-05-13 23:41:37.285061 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-05-13 23:41:37.285064 | orchestrator | Tuesday 13 May 2025 23:38:53 +0000 (0:00:02.323) 0:03:53.023 ***********
2025-05-13 23:41:37.285068 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:41:37.285072 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:41:37.285075 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:41:37.285079 | orchestrator |
2025-05-13 23:41:37.285085 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-05-13 23:41:37.285089 | orchestrator | Tuesday 13 May 2025 23:38:55 +0000 (0:00:01.273) 0:03:54.296 ***********
2025-05-13 23:41:37.285093 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:41:37.285097 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:41:37.285100 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:41:37.285104 | orchestrator |
2025-05-13 23:41:37.285107 | orchestrator | TASK [include_role : placement] ************************************************
2025-05-13 23:41:37.285111 | orchestrator | Tuesday 13 May 2025 23:38:57 +0000 (0:00:02.016) 0:03:56.312 ***********
2025-05-13 23:41:37.285115 | orchestrator | included: placement for testbed-node-0,
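
The firewall task above skips on all hosts, presumably because this testbed does not delegate API firewall rules to kolla-ansible. The two proxysql-config tasks then stage per-service user and query-rule definitions for ProxySQL, which fronts the MariaDB/Galera cluster, and report "changed" on all three controllers. The exact file format written is not visible in this log; the dictionaries below only sketch the kind of content ProxySQL ultimately needs (its mysql_users and mysql_query_rules tables), with placeholder values:

    # Hypothetical shape of the staged ProxySQL definitions for neutron.
    neutron_proxysql_user = {
        'username': 'neutron',        # service database account
        'password': 'REDACTED',       # placeholder, not taken from this log
        'default_hostgroup': 0,       # writer hostgroup
    }

    neutron_query_rule = {
        'schemaname': 'neutron',      # route neutron's schema ...
        'destination_hostgroup': 0,   # ... to the writer hostgroup
        'apply': 1,
    }
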
testbed-node-1, testbed-node-2
2025-05-13 23:41:37.285118 | orchestrator |
2025-05-13 23:41:37.285122 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-05-13 23:41:37.285126 | orchestrator | Tuesday 13 May 2025 23:38:58 +0000 (0:00:01.281) 0:03:57.594 ***********
2025-05-13 23:41:37.285132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-13 23:41:37.285136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-13 23:41:37.285144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-13 23:41:37.285148 | orchestrator |
2025-05-13 23:41:37.285152 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-05-13 23:41:37.285156 | orchestrator | Tuesday 13 May 2025 23:39:01 +0000 (0:00:03.082) 0:04:00.677 ***********
2025-05-13 23:41:37.285162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name':
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.285166 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.285172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.285177 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.285181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.285188 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.285192 | orchestrator | 2025-05-13 23:41:37.285195 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-13 23:41:37.285199 | orchestrator | Tuesday 13 May 2025 23:39:02 +0000 (0:00:00.470) 0:04:01.147 *********** 2025-05-13 23:41:37.285203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285211 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.285214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285222 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.285226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285233 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.285237 | orchestrator | 2025-05-13 23:41:37.285241 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-13 23:41:37.285244 | orchestrator | Tuesday 13 May 2025 23:39:03 +0000 (0:00:01.050) 0:04:02.198 *********** 2025-05-13 23:41:37.285248 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.285254 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.285258 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.285262 | orchestrator | 2025-05-13 23:41:37.285266 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-13 23:41:37.285270 | orchestrator | Tuesday 13 May 2025 23:39:04 +0000 (0:00:01.376) 0:04:03.574 *********** 2025-05-13 23:41:37.285274 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.285278 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.285283 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.285287 | orchestrator | 2025-05-13 23:41:37.285291 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-13 23:41:37.285295 | orchestrator | Tuesday 13 May 2025 23:39:06 +0000 (0:00:02.144) 0:04:05.719 *********** 2025-05-13 23:41:37.285299 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.285303 | orchestrator | 2025-05-13 23:41:37.285307 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-13 23:41:37.285312 | orchestrator | Tuesday 13 May 2025 23:39:07 +0000 (0:00:01.276) 0:04:06.996 *********** 2025-05-13 23:41:37.285320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.285330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.285350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.285368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285377 | orchestrator | 
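
The nova-api entry above pairs a container healthcheck with an 'haproxy' sub-dict; for every enabled entry in that sub-dict, kolla-ansible's haproxy-config role templates a frontend/backend pair on the appropriate VIP. A minimal Python sketch of that expansion follows; the render_stanza() helper and both VIP addresses are placeholders invented for illustration (the real role renders Jinja2 templates), while the service data is copied from the nova_api entry logged above.

    # Minimal sketch, not kolla-ansible's actual template logic: the real role
    # renders Jinja2 templates. render_stanza() and both VIP addresses are
    # assumptions; the service data mirrors the nova_api entry logged above.
    nova_api_haproxy = {
        "nova_api": {"enabled": True, "mode": "http", "external": False,
                     "port": "8774", "listen_port": "8774"},
        "nova_api_external": {"enabled": True, "mode": "http", "external": True,
                              "external_fqdn": "api.testbed.osism.xyz",
                              "port": "8774", "listen_port": "8774"},
    }
    BACKENDS = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]  # testbed-node-0..2

    def render_stanza(name, svc, internal_vip="192.168.16.254",
                      external_vip="203.0.113.10"):  # placeholder VIPs
        # One haproxy "listen" section per enabled entry: bind on the internal
        # or external VIP depending on 'external', balance across all backends.
        bind_ip = external_vip if svc["external"] else internal_vip
        lines = [f"listen {name}",
                 f"    mode {svc['mode']}",
                 f"    bind {bind_ip}:{svc['listen_port']}"]
        lines += [f"    server testbed-node-{i} {host}:{svc['port']} check"
                  for i, host in enumerate(BACKENDS)]
        return "\n".join(lines)

    for name, svc in nova_api_haproxy.items():
        if svc["enabled"]:  # disabled entries are skipped, matching the log
            print(render_stanza(name, svc), end="\n\n")

Run against the data above, this yields one "listen" section per enabled entry, binding nova_api on the internal VIP and nova_api_external on the external one, each balancing across the three testbed nodes.
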
2025-05-13 23:41:37.285381 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-13 23:41:37.285385 | orchestrator | Tuesday 13 May 2025 23:39:12 +0000 (0:00:04.846) 0:04:11.843 *********** 2025-05-13 23:41:37.285396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.285405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285414 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.285418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.285425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285438 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.285445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.285450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.285459 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.285463 | orchestrator | 2025-05-13 23:41:37.285467 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-13 23:41:37.285471 | orchestrator | Tuesday 13 May 2025 23:39:13 +0000 (0:00:00.648) 0:04:12.491 *********** 2025-05-13 23:41:37.285476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285496 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.285500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285522 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.285527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 23:41:37.285544 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.285548 | orchestrator | 2025-05-13 23:41:37.285552 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-13 23:41:37.285556 | orchestrator | Tuesday 13 May 2025 23:39:14 +0000 (0:00:00.861) 0:04:13.353 *********** 2025-05-13 23:41:37.285560 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.285564 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.285569 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.285573 | orchestrator | 2025-05-13 23:41:37.285577 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-13 23:41:37.285581 | orchestrator | Tuesday 13 May 2025 23:39:15 +0000 (0:00:01.703) 0:04:15.056 *********** 2025-05-13 23:41:37.285585 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.285589 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.285593 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.285597 | orchestrator | 2025-05-13 23:41:37.285601 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-13 23:41:37.285606 | orchestrator | Tuesday 13 May 2025 23:39:17 +0000 (0:00:01.813) 0:04:16.870 *********** 2025-05-13 23:41:37.285610 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.285614 | orchestrator | 2025-05-13 23:41:37.285618 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-13 23:41:37.285622 | orchestrator | Tuesday 13 May 2025 23:39:19 +0000 (0:00:01.287) 0:04:18.158 *********** 2025-05-13 23:41:37.285627 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-13 23:41:37.285631 | orchestrator | 2025-05-13 23:41:37.285635 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-13 23:41:37.285638 | orchestrator | Tuesday 13 May 2025 23:39:20 +0000 (0:00:00.998) 0:04:19.156 *********** 2025-05-13 23:41:37.285642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-13 23:41:37.285653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-13 23:41:37.285658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-13 23:41:37.285662 | orchestrator | 2025-05-13 23:41:37.285667 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-13 23:41:37.285671 | orchestrator | Tuesday 13 May 2025 23:39:24 +0000 (0:00:04.101) 0:04:23.257 *********** 2025-05-13 23:41:37.285675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 23:41:37.285679 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.285683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 23:41:37.285687 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.285691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 23:41:37.285709 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.285713 | orchestrator | 2025-05-13 23:41:37.285717 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-13 23:41:37.285720 | orchestrator | Tuesday 13 May 2025 23:39:25 +0000 (0:00:01.410) 
0:04:24.668 *********** 2025-05-13 23:41:37.285724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 23:41:37.285728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 23:41:37.285735 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.285739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 23:41:37.285743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 23:41:37.285747 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.285751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 23:41:37.285757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 23:41:37.285761 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.285765 | orchestrator | 2025-05-13 23:41:37.285768 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-13 23:41:37.285772 | orchestrator | Tuesday 13 May 2025 23:39:27 +0000 (0:00:01.686) 0:04:26.355 *********** 2025-05-13 23:41:37.285776 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.285780 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.285783 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.285787 | orchestrator | 2025-05-13 23:41:37.285792 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-13 23:41:37.285798 | orchestrator | Tuesday 13 May 2025 23:39:29 +0000 (0:00:02.412) 0:04:28.767 *********** 2025-05-13 23:41:37.285804 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.285810 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.285816 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.285821 | orchestrator | 2025-05-13 23:41:37.285830 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-13 23:41:37.285836 | orchestrator | Tuesday 13 May 2025 23:39:32 +0000 (0:00:03.112) 0:04:31.880 *********** 2025-05-13 23:41:37.285842 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-13 23:41:37.285848 | orchestrator | 2025-05-13 23:41:37.285854 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy 
haproxy config] *** 2025-05-13 23:41:37.285860 | orchestrator | Tuesday 13 May 2025 23:39:33 +0000 (0:00:00.851) 0:04:32.732 *********** 2025-05-13 23:41:37.285866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 23:41:37.285873 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.285879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 23:41:37.285890 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.285894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 23:41:37.285898 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.285902 | orchestrator | 2025-05-13 23:41:37.285906 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-13 23:41:37.285909 | orchestrator | Tuesday 13 May 2025 23:39:34 +0000 (0:00:01.293) 0:04:34.025 *********** 2025-05-13 23:41:37.285913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 23:41:37.285917 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.285924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout 
tunnel 1h']}}}})  2025-05-13 23:41:37.285928 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.285932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 23:41:37.285936 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.285939 | orchestrator | 2025-05-13 23:41:37.286032 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-13 23:41:37.286041 | orchestrator | Tuesday 13 May 2025 23:39:36 +0000 (0:00:01.294) 0:04:35.320 *********** 2025-05-13 23:41:37.286047 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.286053 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.286060 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.286066 | orchestrator | 2025-05-13 23:41:37.286073 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-13 23:41:37.286080 | orchestrator | Tuesday 13 May 2025 23:39:37 +0000 (0:00:01.431) 0:04:36.752 *********** 2025-05-13 23:41:37.286086 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.286093 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.286099 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.286106 | orchestrator | 2025-05-13 23:41:37.286113 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-13 23:41:37.286126 | orchestrator | Tuesday 13 May 2025 23:39:39 +0000 (0:00:02.298) 0:04:39.051 *********** 2025-05-13 23:41:37.286133 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.286139 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.286146 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.286150 | orchestrator | 2025-05-13 23:41:37.286154 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-13 23:41:37.286158 | orchestrator | Tuesday 13 May 2025 23:39:42 +0000 (0:00:02.640) 0:04:41.691 *********** 2025-05-13 23:41:37.286162 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-13 23:41:37.286166 | orchestrator | 2025-05-13 23:41:37.286169 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-13 23:41:37.286173 | orchestrator | Tuesday 13 May 2025 23:39:43 +0000 (0:00:01.155) 0:04:42.847 *********** 2025-05-13 23:41:37.286177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 23:41:37.286181 | 
orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.286185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 23:41:37.286189 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.286193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 23:41:37.286197 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.286200 | orchestrator | 2025-05-13 23:41:37.286204 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-13 23:41:37.286208 | orchestrator | Tuesday 13 May 2025 23:39:45 +0000 (0:00:01.313) 0:04:44.161 *********** 2025-05-13 23:41:37.286215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 23:41:37.286219 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.286228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 23:41:37.286236 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.286240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 23:41:37.286244 | orchestrator | skipping: [testbed-node-2] 
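
Across the three cell_proxy_loadbalancer.yml passes, only nova-novncproxy produced 'changed' haproxy results; nova-spicehtml5proxy and nova-serialproxy are skipped outright because their per-service 'enabled' flag is False. Note also that the logged dicts mix native booleans with string values ('yes'/'no', as in nova_metadata_external earlier); Ansible's bool filter treats both forms alike. A rough Python equivalent of that normalization and gating, where kolla_bool() is a stand-in written for this sketch rather than a real kolla-ansible function:

    # Rough sketch of the gating visible above. kolla_bool() mimics Ansible's
    # "| bool" filter for the mixed True/'no'/'yes' values in the logged dicts;
    # it is written for this sketch and is not a real kolla-ansible function.
    def kolla_bool(value):
        if isinstance(value, bool):
            return value
        return str(value).strip().lower() in ("yes", "true", "on", "1")

    console_proxies = {  # data condensed from the three passes logged above
        "nova-novncproxy":      {"enabled": True,  "tunnel_timeout": "1h"},
        "nova-spicehtml5proxy": {"enabled": False, "tunnel_timeout": "1h"},
        "nova-serialproxy":     {"enabled": False, "tunnel_timeout": "10m"},
    }

    for name, svc in console_proxies.items():
        # Disabled proxies skip every haproxy-config task; enabled ones carry
        # 'timeout tunnel' so long-lived console connections are not cut off.
        verdict = "templated" if kolla_bool(svc["enabled"]) else "skipped"
        print(f"{name}: {verdict} (backend_http_extra: timeout tunnel {svc['tunnel_timeout']})")
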
2025-05-13 23:41:37.286248 | orchestrator | 2025-05-13 23:41:37.286251 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-13 23:41:37.286255 | orchestrator | Tuesday 13 May 2025 23:39:46 +0000 (0:00:01.364) 0:04:45.525 *********** 2025-05-13 23:41:37.286259 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.286262 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.286266 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.286270 | orchestrator | 2025-05-13 23:41:37.286273 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-13 23:41:37.286277 | orchestrator | Tuesday 13 May 2025 23:39:48 +0000 (0:00:01.565) 0:04:47.091 *********** 2025-05-13 23:41:37.286280 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.286284 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.286288 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.286291 | orchestrator | 2025-05-13 23:41:37.286295 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-13 23:41:37.286299 | orchestrator | Tuesday 13 May 2025 23:39:50 +0000 (0:00:02.416) 0:04:49.507 *********** 2025-05-13 23:41:37.286303 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.286306 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.286310 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.286314 | orchestrator | 2025-05-13 23:41:37.286318 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-13 23:41:37.286321 | orchestrator | Tuesday 13 May 2025 23:39:53 +0000 (0:00:03.257) 0:04:52.765 *********** 2025-05-13 23:41:37.286325 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.286328 | orchestrator | 2025-05-13 23:41:37.286332 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-13 23:41:37.286336 | orchestrator | Tuesday 13 May 2025 23:39:55 +0000 (0:00:01.672) 0:04:54.437 *********** 2025-05-13 23:41:37.286340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.286349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.286360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 23:41:37.286364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 23:41:37.286372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2025-05-13 23:41:37.286380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.286391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.286401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.286405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 23:41:37.286409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.286428 | orchestrator | 2025-05-13 23:41:37.286432 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-13 23:41:37.286436 | orchestrator | Tuesday 13 May 2025 23:39:58 +0000 (0:00:03.380) 0:04:57.818 *********** 2025-05-13 23:41:37.286441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.286446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 23:41:37.286450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.286465 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.286469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.286516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
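The stream of per-item "skipping" records around this point belongs to the haproxy-config task "Add configuration for octavia when using single external frontend": that task only fires when the deployment consolidates all external APIs behind one shared frontend (in recent kolla-ansible releases this is guarded by a haproxy_single_external_frontend-style toggle), which this testbed does not enable, so every octavia item is skipped on every node. The earlier "Copying over octavia haproxy config" task did report "changed", meaning listen blocks were rendered from each item's 'haproxy' dict. Below is a minimal sketch of that rendering in Python rather than the role's actual Jinja2 template, using the node addresses visible in the healthcheck_curl URLs and a made-up VIP:

# Rough approximation only -- not kolla-ansible's real template. Renders a
# minimal HAProxy listen block from the octavia_api 'haproxy' entry logged above.
SERVICE = {"octavia_api": {"mode": "http", "port": "9876", "listen_port": "9876"}}
BACKENDS = {
    "testbed-node-0": "192.168.16.10",  # from healthcheck_curl http://192.168.16.10:9876
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}
INTERNAL_VIP = "192.168.16.254"  # assumption: the internal VIP is not shown in this log

def render_listen_block(name, cfg, vip, backends):
    # Emit a "listen" section binding the VIP and adding one checked
    # backend server line per controller node.
    lines = [f"listen {name}",
             f"    mode {cfg['mode']}",
             f"    bind {vip}:{cfg['listen_port']}"]
    lines += [f"    server {host} {addr}:{cfg['port']} check"
              for host, addr in backends.items()]
    return "\n".join(lines)

for name, cfg in SERVICE.items():
    print(render_listen_block(name, cfg, INTERNAL_VIP, BACKENDS))

The real role writes one such rendered file per service into the haproxy services config directory on each node and reports "changed" whenever the rendered content differs from what is already there, which is what the "changed" lines for octavia-api reflect.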
2025-05-13 23:41:37.286527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.286539 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.286543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.286554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 23:41:37.286561 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 23:41:37.286569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:41:37.286573 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.286577 | orchestrator | 2025-05-13 23:41:37.286580 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-13 23:41:37.286584 | orchestrator | Tuesday 13 May 2025 23:39:59 +0000 (0:00:00.729) 0:04:58.547 *********** 2025-05-13 23:41:37.286588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 23:41:37.286592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 23:41:37.286596 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.286604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 23:41:37.286607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 23:41:37.286611 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.286615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 23:41:37.286619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 23:41:37.286622 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.286626 | orchestrator | 2025-05-13 23:41:37.286630 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-13 23:41:37.286634 | orchestrator | Tuesday 13 May 2025 23:40:01 +0000 (0:00:01.531) 0:05:00.079 *********** 2025-05-13 23:41:37.286637 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.286641 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.286644 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.286648 | orchestrator | 2025-05-13 23:41:37.286654 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-13 23:41:37.286658 | orchestrator | Tuesday 13 May 2025 23:40:02 +0000 (0:00:01.544) 0:05:01.623 *********** 2025-05-13 23:41:37.286662 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.286665 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.286669 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.286672 | orchestrator | 2025-05-13 23:41:37.286676 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-13 23:41:37.286680 | orchestrator | Tuesday 13 May 2025 23:40:04 +0000 (0:00:02.163) 0:05:03.787 *********** 2025-05-13 23:41:37.286683 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.286687 | orchestrator | 2025-05-13 23:41:37.286691 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-13 23:41:37.286694 | orchestrator | Tuesday 13 May 2025 23:40:06 +0000 (0:00:01.635) 0:05:05.422 *********** 2025-05-13 23:41:37.286714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:41:37.286719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:41:37.286727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:41:37.286734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:41:37.286741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:41:37.286746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:41:37.286755 | orchestrator | 2025-05-13 23:41:37.286759 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-13 23:41:37.286762 | orchestrator | Tuesday 13 May 2025 23:40:11 +0000 (0:00:05.227) 0:05:10.650 *********** 2025-05-13 23:41:37.286766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 23:41:37.286773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 23:41:37.286777 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.286783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 23:41:37.286787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 23:41:37.286795 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.286799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 23:41:37.286803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 23:41:37.286807 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.286811 | orchestrator | 2025-05-13 23:41:37.286817 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-13 23:41:37.286821 | orchestrator | Tuesday 13 May 2025 23:40:12 +0000 (0:00:00.649) 0:05:11.299 *********** 2025-05-13 23:41:37.286825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-13 23:41:37.286829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 23:41:37.286833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 23:41:37.286839 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.286843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-13 23:41:37.286846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 23:41:37.286854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 23:41:37.286858 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.286862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-13 23:41:37.286865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 23:41:37.286869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 23:41:37.286873 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.286877 | orchestrator | 2025-05-13 23:41:37.286881 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-13 23:41:37.286884 | orchestrator | Tuesday 13 May 2025 23:40:13 +0000 (0:00:01.249) 0:05:12.549 *********** 2025-05-13 23:41:37.286888 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.286892 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.286895 | orchestrator | skipping: [testbed-node-2] 
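Note the contrast between the two ProxySQL blocks: the octavia users and rules configs earlier came back "changed" on all three nodes, while the opensearch users task just above is skipped everywhere (and the rules task that follows is skipped too). The log does not state why, but the role's purpose suggests the reason: ProxySQL only fronts MariaDB traffic, so users/rules files are rendered only for services that own a database account, and opensearch keeps its data in its own cluster without one. A minimal sketch of that gating logic, with an illustrative field name:

def proxysql_config_needed(service):
    # Assumption: users/rules are only rendered for services that have a
    # MariaDB account to route through ProxySQL; 'database_user' is an
    # illustrative key, not the role's actual variable name.
    return bool(service.get("database_user"))

octavia = {"database_user": "octavia"}  # -> users + rules rendered ("changed")
opensearch = {}                         # -> both tasks report "skipping"
assert proxysql_config_needed(octavia)
assert not proxysql_config_needed(opensearch)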
2025-05-13 23:41:37.286899 | orchestrator | 2025-05-13 23:41:37.286903 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-13 23:41:37.286907 | orchestrator | Tuesday 13 May 2025 23:40:13 +0000 (0:00:00.479) 0:05:13.028 *********** 2025-05-13 23:41:37.286910 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.286914 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.286918 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.286921 | orchestrator | 2025-05-13 23:41:37.286925 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-13 23:41:37.286929 | orchestrator | Tuesday 13 May 2025 23:40:15 +0000 (0:00:01.346) 0:05:14.375 *********** 2025-05-13 23:41:37.286932 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.286936 | orchestrator | 2025-05-13 23:41:37.286940 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-13 23:41:37.286943 | orchestrator | Tuesday 13 May 2025 23:40:16 +0000 (0:00:01.466) 0:05:15.841 *********** 2025-05-13 23:41:37.286950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-13 23:41:37.286954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:41:37.286968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.286975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.286981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.286987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-13 23:41:37.286993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:41:37.286999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-13 23:41:37.287039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:41:37.287045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-13 23:41:37.287077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 23:41:37.287081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-13 23:41:37.287099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 23:41:37.287109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': 
{}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-13 23:41:37.287125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 23:41:37.287135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287149 | orchestrator | 2025-05-13 23:41:37.287152 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-13 23:41:37.287156 | orchestrator | Tuesday 13 May 2025 23:40:21 +0000 (0:00:04.655) 0:05:20.497 *********** 2025-05-13 23:41:37.287160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-13 23:41:37.287164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:41:37.287168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-13 23:41:37.287238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-13 23:41:37.287245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 23:41:37.287251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:41:37.287258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287280 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287311 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-13 23:41:37.287328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 23:41:37.287332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287346 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-13 23:41:37.287354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:41:37.287358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-13 23:41:37.287384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 
'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 23:41:37.287388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:41:37.287403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:41:37.287407 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.287410 | orchestrator | 2025-05-13 23:41:37.287414 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-13 23:41:37.287418 | orchestrator | Tuesday 13 May 2025 23:40:22 +0000 (0:00:00.790) 0:05:21.287 *********** 2025-05-13 23:41:37.287424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-13 23:41:37.287428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-13 23:41:37.287432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-13 23:41:37.287436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-13 23:41:37.287442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 23:41:37.287446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 23:41:37.287450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 23:41:37.287455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 23:41:37.287459 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287463 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-13 23:41:37.287470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-13 23:41:37.287478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 23:41:37.287482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 23:41:37.287486 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.287489 | orchestrator | 2025-05-13 23:41:37.287493 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-13 23:41:37.287497 | orchestrator | Tuesday 13 May 2025 23:40:23 +0000 (0:00:00.936) 0:05:22.223 *********** 2025-05-13 23:41:37.287501 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287504 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287508 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.287512 | orchestrator | 2025-05-13 23:41:37.287516 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-13 23:41:37.287519 | orchestrator | Tuesday 13 May 2025 23:40:23 +0000 (0:00:00.829) 0:05:23.053 *********** 2025-05-13 23:41:37.287523 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287527 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287530 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.287534 | orchestrator | 2025-05-13 23:41:37.287538 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-13 23:41:37.287541 | orchestrator | Tuesday 13 May 2025 23:40:25 +0000 (0:00:01.383) 0:05:24.436 *********** 2025-05-13 23:41:37.287545 | orchestrator | included: 
rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.287549 | orchestrator | 2025-05-13 23:41:37.287553 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-13 23:41:37.287556 | orchestrator | Tuesday 13 May 2025 23:40:26 +0000 (0:00:01.442) 0:05:25.879 *********** 2025-05-13 23:41:37.287564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:41:37.287569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:41:37.287577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 23:41:37.287581 | orchestrator | 2025-05-13 23:41:37.287585 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-13 
23:41:37.287588 | orchestrator | Tuesday 13 May 2025 23:40:29 +0000 (0:00:03.125) 0:05:29.005 *********** 2025-05-13 23:41:37.287592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-13 23:41:37.287600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-13 23:41:37.287605 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287609 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-13 23:41:37.287620 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.287624 | orchestrator | 2025-05-13 23:41:37.287627 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-13 23:41:37.287631 | orchestrator | 
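
Note the mixed flag styles in the rabbitmq item above: the service-level 'enabled' is a Python boolean, while the rabbitmq_management haproxy entry uses the string 'yes'. Kolla-ansible templates tolerate both by normalizing through Ansible's | bool filter; a comparable normalization in plain Python (the helper name is made up for illustration):

def kolla_bool(value):
    # Booleans pass through untouched; strings are matched against the
    # truthy spellings that Ansible's | bool filter also accepts.
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "on", "1")

assert kolla_bool(True) and kolla_bool("yes")
assert not kolla_bool(False) and not kolla_bool("no")
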
Tuesday 13 May 2025 23:40:30 +0000 (0:00:00.517) 0:05:29.523 *********** 2025-05-13 23:41:37.287635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-13 23:41:37.287639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-13 23:41:37.287643 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287647 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-13 23:41:37.287654 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.287658 | orchestrator | 2025-05-13 23:41:37.287662 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-13 23:41:37.287665 | orchestrator | Tuesday 13 May 2025 23:40:31 +0000 (0:00:00.703) 0:05:30.226 *********** 2025-05-13 23:41:37.287669 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287673 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287676 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.287680 | orchestrator | 2025-05-13 23:41:37.287684 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-13 23:41:37.287688 | orchestrator | Tuesday 13 May 2025 23:40:32 +0000 (0:00:01.232) 0:05:31.459 *********** 2025-05-13 23:41:37.287691 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287725 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287729 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.287733 | orchestrator | 2025-05-13 23:41:37.287737 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-13 23:41:37.287741 | orchestrator | Tuesday 13 May 2025 23:40:33 +0000 (0:00:01.454) 0:05:32.914 *********** 2025-05-13 23:41:37.287744 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:41:37.287748 | orchestrator | 2025-05-13 23:41:37.287751 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-13 23:41:37.287755 | orchestrator | Tuesday 13 May 2025 23:40:35 +0000 (0:00:01.511) 0:05:34.425 *********** 2025-05-13 23:41:37.287762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-13 
23:41:37.287773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.287778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.287782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.287789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.287795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-13 23:41:37.287803 | orchestrator | 2025-05-13 23:41:37.287807 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-13 23:41:37.287810 | orchestrator | Tuesday 13 May 2025 23:40:41 +0000 (0:00:06.409) 0:05:40.835 *********** 2025-05-13 23:41:37.287816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.287821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.287824 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287828 | orchestrator | 
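
The healthcheck sub-dict logged for the skyline containers (interval, retries, start_period and timeout as second-valued strings, plus a CMD-SHELL test) maps closely onto the Docker Engine API's healthcheck object, which expects the duration fields in nanoseconds. A sketch of that conversion, assuming the Engine API field names; this is illustrative only, not kolla-ansible's actual container-start code:

NS_PER_SECOND = 1_000_000_000

def to_docker_healthcheck(hc):
    # Kolla-style values are strings holding seconds; the Engine API wants
    # nanoseconds for every duration field and an integer retry count.
    return {
        "Test": hc["test"],
        "Interval": int(hc["interval"]) * NS_PER_SECOND,
        "Timeout": int(hc["timeout"]) * NS_PER_SECOND,
        "StartPeriod": int(hc["start_period"]) * NS_PER_SECOND,
        "Retries": int(hc["retries"]),
    }

print(to_docker_healthcheck({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9998/docs"],
    "timeout": "30",
}))
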
skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.287837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.287845 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.287852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-13 23:41:37.287856 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.287860 | orchestrator | 2025-05-13 23:41:37.287864 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-13 23:41:37.287867 | orchestrator | Tuesday 13 May 2025 23:40:42 +0000 (0:00:01.037) 0:05:41.872 *********** 2025-05-13 23:41:37.287871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287890 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287913 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 23:41:37.287934 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.287937 | orchestrator | 2025-05-13 23:41:37.287941 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-13 23:41:37.287945 | orchestrator | Tuesday 13 May 2025 23:40:43 +0000 (0:00:00.988) 0:05:42.861 *********** 2025-05-13 23:41:37.287948 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.287952 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.287956 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.287959 | orchestrator | 2025-05-13 23:41:37.287963 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-13 23:41:37.287967 | orchestrator | Tuesday 13 May 2025 23:40:45 +0000 (0:00:01.341) 0:05:44.202 *********** 2025-05-13 23:41:37.287970 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.287974 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.287978 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.287981 | orchestrator | 2025-05-13 23:41:37.287985 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-13 23:41:37.287989 | orchestrator | Tuesday 13 May 2025 23:40:47 +0000 (0:00:02.304) 0:05:46.507 *********** 2025-05-13 23:41:37.287992 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.287996 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.287999 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288003 | orchestrator | 2025-05-13 23:41:37.288007 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-13 23:41:37.288010 | orchestrator | Tuesday 13 May 2025 23:40:48 +0000 (0:00:00.685) 0:05:47.192 *********** 2025-05-13 23:41:37.288014 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.288018 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288021 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288025 | orchestrator | 2025-05-13 23:41:37.288028 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-13 23:41:37.288036 | orchestrator | Tuesday 13 May 2025 23:40:48 +0000 (0:00:00.358) 0:05:47.550 *********** 2025-05-13 23:41:37.288039 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.288043 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288047 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288051 | orchestrator | 2025-05-13 23:41:37.288054 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-13 23:41:37.288058 | orchestrator | Tuesday 13 May 2025 23:40:48 +0000 (0:00:00.327) 0:05:47.878 *********** 2025-05-13 23:41:37.288062 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.288065 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288069 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288072 | orchestrator | 2025-05-13 23:41:37.288076 | orchestrator | TASK [include_role : 
watcher] ************************************************** 2025-05-13 23:41:37.288080 | orchestrator | Tuesday 13 May 2025 23:40:49 +0000 (0:00:00.309) 0:05:48.188 *********** 2025-05-13 23:41:37.288083 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.288087 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288091 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288094 | orchestrator | 2025-05-13 23:41:37.288098 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-13 23:41:37.288102 | orchestrator | Tuesday 13 May 2025 23:40:49 +0000 (0:00:00.685) 0:05:48.873 *********** 2025-05-13 23:41:37.288105 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.288109 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288113 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288116 | orchestrator | 2025-05-13 23:41:37.288120 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-13 23:41:37.288126 | orchestrator | Tuesday 13 May 2025 23:40:50 +0000 (0:00:00.616) 0:05:49.490 *********** 2025-05-13 23:41:37.288130 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.288134 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.288138 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.288144 | orchestrator | 2025-05-13 23:41:37.288150 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-13 23:41:37.288157 | orchestrator | Tuesday 13 May 2025 23:40:51 +0000 (0:00:00.687) 0:05:50.177 *********** 2025-05-13 23:41:37.288163 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.288169 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.288175 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.288182 | orchestrator | 2025-05-13 23:41:37.288188 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-13 23:41:37.288194 | orchestrator | Tuesday 13 May 2025 23:40:51 +0000 (0:00:00.713) 0:05:50.891 *********** 2025-05-13 23:41:37.288201 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.288208 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.288215 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.288221 | orchestrator | 2025-05-13 23:41:37.288228 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-13 23:41:37.288232 | orchestrator | Tuesday 13 May 2025 23:40:52 +0000 (0:00:00.921) 0:05:51.813 *********** 2025-05-13 23:41:37.288235 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.288239 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.288246 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.288250 | orchestrator | 2025-05-13 23:41:37.288254 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-13 23:41:37.288257 | orchestrator | Tuesday 13 May 2025 23:40:53 +0000 (0:00:00.910) 0:05:52.724 *********** 2025-05-13 23:41:37.288261 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.288264 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.288268 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.288272 | orchestrator | 2025-05-13 23:41:37.288275 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-13 23:41:37.288279 | orchestrator | Tuesday 13 May 2025 23:40:54 +0000 
(0:00:00.953) 0:05:53.677 *********** 2025-05-13 23:41:37.288287 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.288291 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.288295 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.288298 | orchestrator | 2025-05-13 23:41:37.288302 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-13 23:41:37.288306 | orchestrator | Tuesday 13 May 2025 23:41:04 +0000 (0:00:10.200) 0:06:03.878 *********** 2025-05-13 23:41:37.288309 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.288313 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.288317 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.288320 | orchestrator | 2025-05-13 23:41:37.288324 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-13 23:41:37.288327 | orchestrator | Tuesday 13 May 2025 23:41:05 +0000 (0:00:00.792) 0:06:04.670 *********** 2025-05-13 23:41:37.288331 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.288335 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.288338 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.288342 | orchestrator | 2025-05-13 23:41:37.288346 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-13 23:41:37.288349 | orchestrator | Tuesday 13 May 2025 23:41:19 +0000 (0:00:13.913) 0:06:18.583 *********** 2025-05-13 23:41:37.288353 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.288357 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.288360 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.288364 | orchestrator | 2025-05-13 23:41:37.288369 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-13 23:41:37.288374 | orchestrator | Tuesday 13 May 2025 23:41:20 +0000 (0:00:00.715) 0:06:19.299 *********** 2025-05-13 23:41:37.288380 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:41:37.288387 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:41:37.288393 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:41:37.288399 | orchestrator | 2025-05-13 23:41:37.288405 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-13 23:41:37.288411 | orchestrator | Tuesday 13 May 2025 23:41:29 +0000 (0:00:09.381) 0:06:28.680 *********** 2025-05-13 23:41:37.288417 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.288424 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288430 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288437 | orchestrator | 2025-05-13 23:41:37.288442 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-13 23:41:37.288449 | orchestrator | Tuesday 13 May 2025 23:41:29 +0000 (0:00:00.345) 0:06:29.025 *********** 2025-05-13 23:41:37.288455 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.288458 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288462 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288466 | orchestrator | 2025-05-13 23:41:37.288469 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-13 23:41:37.288473 | orchestrator | Tuesday 13 May 2025 23:41:30 +0000 (0:00:00.364) 0:06:29.390 *********** 2025-05-13 23:41:37.288477 | orchestrator | skipping: 
[testbed-node-0] 2025-05-13 23:41:37.288480 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288484 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288487 | orchestrator | 2025-05-13 23:41:37.288491 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-13 23:41:37.288494 | orchestrator | Tuesday 13 May 2025 23:41:30 +0000 (0:00:00.348) 0:06:29.739 *********** 2025-05-13 23:41:37.288498 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.288502 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288505 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288509 | orchestrator | 2025-05-13 23:41:37.288513 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-13 23:41:37.288516 | orchestrator | Tuesday 13 May 2025 23:41:31 +0000 (0:00:00.758) 0:06:30.497 *********** 2025-05-13 23:41:37.288526 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.288530 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288533 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288537 | orchestrator | 2025-05-13 23:41:37.288540 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-13 23:41:37.288544 | orchestrator | Tuesday 13 May 2025 23:41:31 +0000 (0:00:00.358) 0:06:30.855 *********** 2025-05-13 23:41:37.288548 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:41:37.288555 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:41:37.288558 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:41:37.288562 | orchestrator | 2025-05-13 23:41:37.288566 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-13 23:41:37.288569 | orchestrator | Tuesday 13 May 2025 23:41:32 +0000 (0:00:00.361) 0:06:31.216 *********** 2025-05-13 23:41:37.288573 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.288577 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.288580 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.288584 | orchestrator | 2025-05-13 23:41:37.288587 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-13 23:41:37.288591 | orchestrator | Tuesday 13 May 2025 23:41:33 +0000 (0:00:00.913) 0:06:32.129 *********** 2025-05-13 23:41:37.288595 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:41:37.288598 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:41:37.288602 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:41:37.288606 | orchestrator | 2025-05-13 23:41:37.288609 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:41:37.288613 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-13 23:41:37.288620 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-13 23:41:37.288624 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-13 23:41:37.288628 | orchestrator | 2025-05-13 23:41:37.288631 | orchestrator | 2025-05-13 23:41:37.288635 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:41:37.288639 | orchestrator | Tuesday 13 May 2025 23:41:34 +0000 (0:00:01.366) 0:06:33.496 *********** 2025-05-13 23:41:37.288642 | 
orchestrator | =============================================================================== 2025-05-13 23:41:37.288646 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.91s 2025-05-13 23:41:37.288650 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.20s 2025-05-13 23:41:37.288653 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.38s 2025-05-13 23:41:37.288657 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.41s 2025-05-13 23:41:37.288660 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.00s 2025-05-13 23:41:37.288664 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.25s 2025-05-13 23:41:37.288668 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.23s 2025-05-13 23:41:37.288671 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.06s 2025-05-13 23:41:37.288675 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.85s 2025-05-13 23:41:37.288679 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.66s 2025-05-13 23:41:37.288682 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.32s 2025-05-13 23:41:37.288686 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.22s 2025-05-13 23:41:37.288690 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.10s 2025-05-13 23:41:37.288693 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.09s 2025-05-13 23:41:37.288716 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.09s 2025-05-13 23:41:37.288720 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.91s 2025-05-13 23:41:37.288724 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.89s 2025-05-13 23:41:37.288727 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.80s 2025-05-13 23:41:37.288731 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.75s 2025-05-13 23:41:37.288734 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.75s 2025-05-13 23:41:37.288738 | orchestrator | 2025-05-13 23:41:37 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:41:37.288742 | orchestrator | 2025-05-13 23:41:37 | INFO  | Task e3a94d85-942a-4539-983d-3a4a13b619db is in state STARTED 2025-05-13 23:41:37.288746 | orchestrator | 2025-05-13 23:41:37 | INFO  | Task 431a9fc2-c86e-4eb3-8b59-dfef1748524e is in state STARTED 2025-05-13 23:41:37.288750 | orchestrator | 2025-05-13 23:41:37 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:41:40.333353 | orchestrator | 2025-05-13 23:41:40 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state STARTED 2025-05-13 23:41:40.338526 | orchestrator | 2025-05-13 23:41:40 | INFO  | Task e3a94d85-942a-4539-983d-3a4a13b619db is in state STARTED 2025-05-13 23:41:40.339647 | orchestrator | 2025-05-13 23:41:40 | INFO  | Task 431a9fc2-c86e-4eb3-8b59-dfef1748524e is in state STARTED 2025-05-13 23:41:40.339681 | orchestrator | 
2025-05-13 23:41:40.339681 | orchestrator | 2025-05-13 23:41:40 | INFO  | Wait 1 second(s) until the next check
[identical "is in state STARTED" / "Wait 1 second(s) until the next check" round trips repeat every ~3 seconds from 23:41:43 to 23:43:33 for tasks e6838759-dc51-4445-8ca4-ecc8c7941f72, e3a94d85-942a-4539-983d-3a4a13b619db and 431a9fc2-c86e-4eb3-8b59-dfef1748524e; the only state changes in that window:]
2025-05-13 23:41:58.598682 | orchestrator | 2025-05-13 23:41:58 | INFO  | Task 617745cb-8d8c-463f-aeba-ff3bad4d504c is in state STARTED
2025-05-13 23:42:16.901068 | orchestrator | 2025-05-13 23:42:16 | INFO  | Task 617745cb-8d8c-463f-aeba-ff3bad4d504c is in state SUCCESS
2025-05-13 23:43:36.376930 | orchestrator | 2025-05-13 23:43:36 | INFO  | Task e6838759-dc51-4445-8ca4-ecc8c7941f72 is in state SUCCESS
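The long run of polling lines above boils down to a simple wait loop: the deploy wrapper asks the OSISM manager for each task's state, prints it, and sleeps between rounds until every task reaches a terminal state. A minimal sketch in Python, assuming a hypothetical get_task_state() callable (the actual client API is not shown in this log):

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        # Poll each still-pending task until all of them report a
        # terminal state, mirroring the log lines above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # assumed to return e.g. "STARTED"
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.remove(task_id)
            if pending:
                print(f"Wait {interval:g} second(s) until the next check")
                time.sleep(interval)

The ~3-second spacing of the log timestamps, against the printed "Wait 1 second(s)", suggests each polling round itself takes a couple of seconds on top of the sleep.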
2025-05-13 23:43:36.378851 | orchestrator |
2025-05-13 23:43:36.378932 | orchestrator | None
2025-05-13 23:43:36.378948 | orchestrator |
2025-05-13 23:43:36.378960 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-05-13 23:43:36.378973 | orchestrator |
2025-05-13 23:43:36.378985 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-05-13 23:43:36.378997 | orchestrator | Tuesday 13 May 2025 23:32:09 +0000 (0:00:00.895) 0:00:00.895 ***********
2025-05-13 23:43:36.379036 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:43:36.379069 | orchestrator |
2025-05-13 23:43:36.379081 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-05-13 23:43:36.379092 | orchestrator | Tuesday 13 May 2025 23:32:11 +0000 (0:00:01.249) 0:00:02.145 ***********
2025-05-13 23:43:36.379104 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.379117 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.379129 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.379140 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.379159 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.379170 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.379182 | orchestrator |
2025-05-13 23:43:36.379193 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-05-13 23:43:36.379205 | orchestrator | Tuesday 13 May 2025 23:32:12 +0000 (0:00:01.740) 0:00:03.885 ***********
2025-05-13 23:43:36.379216 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.379227 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.379238 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.379250 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.379261 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.379272 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.379290 | orchestrator |
2025-05-13 23:43:36.379302 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-05-13 23:43:36.379313 | orchestrator | Tuesday 13 May 2025 23:32:13 +0000 (0:00:01.032) 0:00:04.918 ***********
2025-05-13 23:43:36.379324 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.379335 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.379347 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.379358 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.379369 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.379380 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.379400 | orchestrator |
2025-05-13 23:43:36.379413 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-05-13 23:43:36.379426 | orchestrator | Tuesday 13 May 2025 23:32:14 +0000 (0:00:00.983) 0:00:05.902 ***********
2025-05-13 23:43:36.379438 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.379451 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.379464 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.379476 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.379488 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.379500 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.379513 | orchestrator |
2025-05-13 23:43:36.379526 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-05-13 23:43:36.379538 | orchestrator | Tuesday 13 May 2025 23:32:15 +0000 (0:00:00.985) 0:00:06.887 ***********
2025-05-13 23:43:36.379551 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.379563 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.379576 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.379588 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.379600 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.379613 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.379625 | orchestrator |
2025-05-13 23:43:36.379703 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-05-13 23:43:36.379725 | orchestrator | Tuesday 13 May 2025 23:32:16 +0000 (0:00:00.802) 0:00:07.689 ***********
2025-05-13 23:43:36.379745 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.379763 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.379781 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.379800 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.379812 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.379822 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.379833 | orchestrator |
2025-05-13 23:43:36.379844 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-05-13 23:43:36.379886 | orchestrator | Tuesday 13 May 2025 23:32:17 +0000 (0:00:00.767) 0:00:08.457 ***********
2025-05-13 23:43:36.379899 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.379912 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.379923 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.379945 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.379956 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.379966 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.379977 | orchestrator |
2025-05-13 23:43:36.379988 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-05-13 23:43:36.379998 | orchestrator | Tuesday 13 May 2025 23:32:18 +0000 (0:00:00.697) 0:00:09.154 ***********
2025-05-13 23:43:36.380009 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.380020 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.380030 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.380041 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.380051 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.380062 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.380072 | orchestrator |
2025-05-13 23:43:36.380083 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-05-13 23:43:36.380094 | orchestrator | Tuesday 13 May 2025 23:32:19 +0000 (0:00:00.939) 0:00:10.093 ***********
2025-05-13 23:43:36.380105 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-13 23:43:36.380116 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-13 23:43:36.380127 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-13 23:43:36.380137 | orchestrator |
2025-05-13 23:43:36.380148 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-05-13 23:43:36.380158 | orchestrator | Tuesday 13 May 2025 23:32:19 +0000 (0:00:00.720) 0:00:10.814 ***********
2025-05-13 23:43:36.380169 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.380179 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.380190 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.380200 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.380211 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.380221 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.380232 | orchestrator |
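The "Check if podman binary is present" / "Set_fact container_binary" pair above is a standard runtime-detection step: probe PATH for podman and fall back to docker. A minimal sketch of that logic (an assumption about the role's intent, not its literal Jinja conditionals):

    import shutil

    def detect_container_binary() -> str:
        # Prefer podman when its binary is on PATH, otherwise use docker.
        return "podman" if shutil.which("podman") else "docker"

    if __name__ == "__main__":
        print(detect_container_binary())

The 'docker ps' commands recorded in later skipped items indicate that docker was the binary selected on these nodes.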
2025-05-13 23:43:36.380273 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-05-13 23:43:36.380285 | orchestrator | Tuesday 13 May 2025 23:32:21 +0000 (0:00:01.437) 0:00:12.252 ***********
2025-05-13 23:43:36.380296 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-13 23:43:36.380307 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-13 23:43:36.380317 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-13 23:43:36.380328 | orchestrator |
2025-05-13 23:43:36.380339 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-05-13 23:43:36.380349 | orchestrator | Tuesday 13 May 2025 23:32:24 +0000 (0:00:02.961) 0:00:15.213 ***********
2025-05-13 23:43:36.380361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-13 23:43:36.380386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-13 23:43:36.380397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-13 23:43:36.380408 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.380419 | orchestrator |
2025-05-13 23:43:36.380429 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-05-13 23:43:36.380440 | orchestrator | Tuesday 13 May 2025 23:32:25 +0000 (0:00:00.881) 0:00:16.095 ***********
2025-05-13 23:43:36.380452 | orchestrator | skipping: [testbed-node-0] => [items for testbed-node-0/1/2 elided; each was skipped because the conditional 'not containerized_deployment | bool' was False]
2025-05-13 23:43:36.380520 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.380531 | orchestrator |
2025-05-13 23:43:36.380542 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-05-13 23:43:36.380553 | orchestrator | Tuesday 13 May 2025 23:32:26 +0000 (0:00:00.915) 0:00:17.010 ***********
2025-05-13 23:43:36.380565 | orchestrator | skipping: [testbed-node-0] => [items for testbed-node-0/1/2 elided; each was skipped because the conditional 'not containerized_deployment | bool' was False]
2025-05-13 23:43:36.380622 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.380633 | orchestrator |
2025-05-13 23:43:36.380671 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-05-13 23:43:36.380683 | orchestrator | Tuesday 13 May 2025 23:32:26 +0000 (0:00:00.145) 0:00:17.156 ***********
2025-05-13 23:43:36.380720 | orchestrator | skipping: [testbed-node-0] => [items elided: each carries the result of 'docker ps -q --filter name=ceph-mon-testbed-node-N' for N=0,1,2 - rc 0 with empty stdout, i.e. no mon container is running yet]
2025-05-13 23:43:36.380816 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.380827 | orchestrator |
2025-05-13 23:43:36.380838 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-05-13 23:43:36.380849 | orchestrator | Tuesday 13 May 2025 23:32:26 +0000 (0:00:00.177) 0:00:17.334 ***********
2025-05-13 23:43:36.380860 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.380870 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.380881 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.380892 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.380903 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.380913 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.380924 | orchestrator |
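The "Find a running mon container" task shells out to exactly the command preserved in the skipped items above ('docker ps -q --filter name=ceph-mon-<hostname>'). A standalone sketch of that probe:

    import subprocess

    def find_running_mon(container_binary: str, hostname: str) -> list[str]:
        # Mirrors the command recorded in the module_args above, e.g.
        #   docker ps -q --filter name=ceph-mon-testbed-node-0
        result = subprocess.run(
            [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
            capture_output=True,
            text=True,
            check=False,  # empty output is not an error; it means no mon runs here
        )
        return result.stdout.split()  # list of container IDs, empty on a fresh node

On this fresh testbed every probe returned rc 0 with empty stdout, which is why the running_mon facts stayed unset.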
2025-05-13 23:43:36.380935 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-05-13 23:43:36.380946 | orchestrator | Tuesday 13 May 2025 23:32:27 +0000 (0:00:01.251) 0:00:18.585 ***********
2025-05-13 23:43:36.380956 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.380967 | orchestrator |
2025-05-13 23:43:36.380978 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-05-13 23:43:36.380988 | orchestrator | Tuesday 13 May 2025 23:32:28 +0000 (0:00:00.793) 0:00:19.379 ***********
2025-05-13 23:43:36.380999 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.381010 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.381021 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.381032 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.381043 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.381054 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.381064 | orchestrator |
2025-05-13 23:43:36.381075 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-05-13 23:43:36.381086 | orchestrator | Tuesday 13 May 2025 23:32:29 +0000 (0:00:01.531) 0:00:20.910 ***********
2025-05-13 23:43:36.381097 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.381107 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.381118 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.381128 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.381139 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.381149 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.381160 | orchestrator |
2025-05-13 23:43:36.381171 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-13 23:43:36.381182 | orchestrator | Tuesday 13 May 2025 23:32:31 +0000 (0:00:01.443) 0:00:22.354 ***********
2025-05-13 23:43:36.381193 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.381203 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.381213 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.381235 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.381246 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.381256 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.381267 | orchestrator |
2025-05-13 23:43:36.381277 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-05-13 23:43:36.381288 | orchestrator | Tuesday 13 May 2025 23:32:32 +0000 (0:00:00.972) 0:00:23.326 ***********
2025-05-13 23:43:36.381299 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.381309 | orchestrator |
2025-05-13 23:43:36.381320 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-05-13 23:43:36.381330 | orchestrator | Tuesday 13 May 2025 23:32:32 +0000 (0:00:00.120) 0:00:23.446 ***********
2025-05-13 23:43:36.381388 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.381399 | orchestrator |
2025-05-13 23:43:36.381410 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-13 23:43:36.381421 | orchestrator | Tuesday 13 May 2025 23:32:32 +0000 (0:00:00.236) 0:00:23.683 ***********
2025-05-13 23:43:36.381431 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.381442 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.381453 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.381464 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.381475 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.381485 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.381496 | orchestrator |
2025-05-13 23:43:36.381507 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-05-13 23:43:36.381624 | orchestrator | Tuesday 13 May 2025 23:32:33 +0000 (0:00:00.614) 0:00:24.297 ***********
2025-05-13 23:43:36.381673 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.381685 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.381696 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.381707 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.381717 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.381728 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.381739 | orchestrator |
2025-05-13 23:43:36.381750 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-05-13 23:43:36.381760 | orchestrator | Tuesday 13 May 2025 23:32:34 +0000 (0:00:00.905) 0:00:25.202 ***********
2025-05-13 23:43:36.381771 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.381781 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.381792 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.381803 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.381813 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.381824 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.381834 | orchestrator |
2025-05-13 23:43:36.381845 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-05-13 23:43:36.381856 | orchestrator | Tuesday 13 May 2025 23:32:35 +0000 (0:00:00.832) 0:00:26.035 ***********
2025-05-13 23:43:36.381866 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.381877 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.381887 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.381897 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.381908 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.381918 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.381929 | orchestrator |
2025-05-13 23:43:36.381940 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-05-13 23:43:36.381950 | orchestrator | Tuesday 13 May 2025 23:32:36 +0000 (0:00:01.090) 0:00:27.125 ***********
2025-05-13 23:43:36.381961 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.381971 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.381982 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.381992 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.382003 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.382013 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.382098 | orchestrator |
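The "Resolve device link(s)" tasks above normalize configured OSD device paths (for example /dev/disk/by-id symlinks) to their underlying block devices before the device lists are built; here they are all skipped because no such indirection is configured. A sketch of the resolution step, assuming a plain list of device paths:

    import os

    def resolve_device_links(devices: list[str]) -> list[str]:
        # os.path.realpath follows a symlink chain such as
        # /dev/disk/by-id/... down to the real node, e.g. /dev/sdb;
        # a path that is not a symlink maps to itself.
        return [os.path.realpath(device) for device in devices]

    print(resolve_device_links(["/dev/sda"]))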
2025-05-13 23:43:36.382109 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-05-13 23:43:36.382120 | orchestrator | Tuesday 13 May 2025 23:32:36 +0000 (0:00:00.719) 0:00:27.845 ***********
2025-05-13 23:43:36.382130 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.382141 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.382151 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.382162 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.382172 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.382183 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.382193 | orchestrator |
2025-05-13 23:43:36.382230 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-05-13 23:43:36.382243 | orchestrator | Tuesday 13 May 2025 23:32:37 +0000 (0:00:00.752) 0:00:28.597 ***********
2025-05-13 23:43:36.382254 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.382265 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.382276 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.382286 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.382297 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.382307 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.382318 | orchestrator |
2025-05-13 23:43:36.382328 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-05-13 23:43:36.382339 | orchestrator | Tuesday 13 May 2025 23:32:38 +0000 (0:00:00.618) 0:00:29.216 ***********
[per-device skip items elided: on each host the task iterates over ansible_facts['devices'] and skips every entry; the items enumerate the virtual loop devices loop0-loop7 (0.00 Bytes), the 80.00 GB QEMU HARDDISK sda with partitions sda1 (cloudimg-rootfs, 79.00 GB), sda14 (4.00 MB), sda15 (UEFI, 106.00 MB) and sda16 (BOOT, 913.00 MB), the removable sr0 QEMU DVD-ROM (config-2, 506.00 KB), and on testbed-node-3 and testbed-node-4 the Ceph OSD LVM volumes dm-0 and dm-1 (20.00 GB each); testbed-node-0, testbed-node-1 and testbed-node-2 each finish with a bare "skipping" line, and the log breaks off mid-item in the sda entry of testbed-node-3]
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.384849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.384872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cf553414--fd5b--54a4--812a--8e7012220720-osd--block--cf553414--fd5b--54a4--812a--8e7012220720'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GFR53h-bjpN-LvAK-K4J7-1dHu-eaMe-SvdOns', 'scsi-0QEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45', 'scsi-SQEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.384889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9ea6307c--c51b--54ed--aeb4--48fe7d66605c-osd--block--9ea6307c--c51b--54ed--aeb4--48fe7d66605c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YRX48y-tAU9-6MkF-cnzG-Gs1X-DKt5-tiM1Jb', 'scsi-0QEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000', 'scsi-SQEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.384900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.385016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438', 'scsi-SQEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.385046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53cfcf66--6862--5829--a71b--dc902cfbd9df-osd--block--53cfcf66--6862--5829--a71b--dc902cfbd9df', 'dm-uuid-LVM-u04ANOtmOdGz1Vzl9h6jqIKzRS7efN642z7ZMI1f66JIrWUs8jF7PnqjXXBvMoRy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.385058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.385070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.385081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d153f4c4--5597--54b4--b460--41e490b92c19-osd--block--d153f4c4--5597--54b4--b460--41e490b92c19', 'dm-uuid-LVM-PYU5eiYmArZZx9l0IRv7NkCQeLmEUpEudrGIxN3Awr1GUIw1Dw6FjNk2029z1Y9Y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.385092 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.385104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.385120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.385132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.385149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.385167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.385180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.385198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.385210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8f56c737--ae06--5042--be62--d4d7430a3913-osd--block--8f56c737--ae06--5042--be62--d4d7430a3913'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kKUXcV-NRc8-Te46-jGWo-Ip4f-DlWw-6i6xRr', 'scsi-0QEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05', 'scsi-SQEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.386821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3-osd--block--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iALSnE-fged-l0II-iQ2Q-DplQ-iluv-DkubK5', 'scsi-0QEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648', 'scsi-SQEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.386852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.386862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677', 'scsi-SQEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.386871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.386880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.386889 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.386905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.386914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:43:36.386932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part1', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part14', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part15', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part16', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.386964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--53cfcf66--6862--5829--a71b--dc902cfbd9df-osd--block--53cfcf66--6862--5829--a71b--dc902cfbd9df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rHhLav-nIRm-kwul-12gR-Y0i1-rO5X-mga0H8', 'scsi-0QEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8', 'scsi-SQEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.386977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d153f4c4--5597--54b4--b460--41e490b92c19-osd--block--d153f4c4--5597--54b4--b460--41e490b92c19'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zRfxmE-geHX-KaCf-Tjbv-h6oW-e94U-M8FcSh', 'scsi-0QEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e', 'scsi-SQEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.386986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3', 'scsi-SQEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.387005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:43:36.387014 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.387022 | orchestrator | 2025-05-13 23:43:36.387031 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-05-13 23:43:36.387040 | orchestrator | Tuesday 13 May 2025 23:32:40 +0000 (0:00:01.818) 0:00:31.035 *********** 2025-05-13 23:43:36.387049 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387059 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387067 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387076 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387088 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387114 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387129 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387138 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387146 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387154 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387166 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387180 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387228 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14cb708c-4d88-41dd-af1a-38adc7d81bad', 'scsi-SQEMU_QEMU_HARDDISK_14cb708c-4d88-41dd-af1a-38adc7d81bad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14cb708c-4d88-41dd-af1a-38adc7d81bad-part1', 'scsi-SQEMU_QEMU_HARDDISK_14cb708c-4d88-41dd-af1a-38adc7d81bad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14cb708c-4d88-41dd-af1a-38adc7d81bad-part14', 'scsi-SQEMU_QEMU_HARDDISK_14cb708c-4d88-41dd-af1a-38adc7d81bad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14cb708c-4d88-41dd-af1a-38adc7d81bad-part15', 'scsi-SQEMU_QEMU_HARDDISK_14cb708c-4d88-41dd-af1a-38adc7d81bad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14cb708c-4d88-41dd-af1a-38adc7d81bad-part16', 'scsi-SQEMU_QEMU_HARDDISK_14cb708c-4d88-41dd-af1a-38adc7d81bad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387239 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387252 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387290 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387306 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387315 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387332 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a0cda05-6059-4279-9091-38c6851dd1b0', 'scsi-SQEMU_QEMU_HARDDISK_7a0cda05-6059-4279-9091-38c6851dd1b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a0cda05-6059-4279-9091-38c6851dd1b0-part1', 'scsi-SQEMU_QEMU_HARDDISK_7a0cda05-6059-4279-9091-38c6851dd1b0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a0cda05-6059-4279-9091-38c6851dd1b0-part14', 'scsi-SQEMU_QEMU_HARDDISK_7a0cda05-6059-4279-9091-38c6851dd1b0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a0cda05-6059-4279-9091-38c6851dd1b0-part15', 'scsi-SQEMU_QEMU_HARDDISK_7a0cda05-6059-4279-9091-38c6851dd1b0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a0cda05-6059-4279-9091-38c6851dd1b0-part16', 'scsi-SQEMU_QEMU_HARDDISK_7a0cda05-6059-4279-9091-38c6851dd1b0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387356 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387365 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.387434 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387444 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387452 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387460 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387473 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387488 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387497 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.387510 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387519 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387531 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742983f3-e890-4b21-9db6-0cea970b685b', 'scsi-SQEMU_QEMU_HARDDISK_742983f3-e890-4b21-9db6-0cea970b685b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742983f3-e890-4b21-9db6-0cea970b685b-part1', 'scsi-SQEMU_QEMU_HARDDISK_742983f3-e890-4b21-9db6-0cea970b685b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742983f3-e890-4b21-9db6-0cea970b685b-part14', 'scsi-SQEMU_QEMU_HARDDISK_742983f3-e890-4b21-9db6-0cea970b685b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742983f3-e890-4b21-9db6-0cea970b685b-part15', 'scsi-SQEMU_QEMU_HARDDISK_742983f3-e890-4b21-9db6-0cea970b685b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742983f3-e890-4b21-9db6-0cea970b685b-part16', 'scsi-SQEMU_QEMU_HARDDISK_742983f3-e890-4b21-9db6-0cea970b685b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:43:36.387557 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387579 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cf553414--fd5b--54a4--812a--8e7012220720-osd--block--cf553414--fd5b--54a4--812a--8e7012220720', 'dm-uuid-LVM-1pX9WnHfeMT9nTIQouj7wiTl4rr7tArytNXgfJr31zE1gxodC69TGdXblsuHSIqw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9ea6307c--c51b--54ed--aeb4--48fe7d66605c-osd--block--9ea6307c--c51b--54ed--aeb4--48fe7d66605c', 'dm-uuid-LVM-RSGUaRafehkiir5SfOds7jROuPzmjzWVxLyJIWSEcWCDPJyfdhiOuZzq4LK6qQoo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387607 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387675 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.387724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387779 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387825 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--cf553414--fd5b--54a4--812a--8e7012220720-osd--block--cf553414--fd5b--54a4--812a--8e7012220720'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GFR53h-bjpN-LvAK-K4J7-1dHu-eaMe-SvdOns', 'scsi-0QEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45', 'scsi-SQEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387856 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9ea6307c--c51b--54ed--aeb4--48fe7d66605c-osd--block--9ea6307c--c51b--54ed--aeb4--48fe7d66605c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YRX48y-tAU9-6MkF-cnzG-Gs1X-DKt5-tiM1Jb', 'scsi-0QEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000', 'scsi-SQEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438', 'scsi-SQEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8f56c737--ae06--5042--be62--d4d7430a3913-osd--block--8f56c737--ae06--5042--be62--d4d7430a3913', 'dm-uuid-LVM-X31KRVqgJz32iEekGhM2Qq1k078Hw2qZdb03amgeAWfUc6Oza19mbyk8twnSEAIr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387904 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.387913 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3-osd--block--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3', 'dm-uuid-LVM-4jWP9izaLLqkoflDNqUAXrWS6p6173C51LsIYNJBAT5kTNs3a3kKM70MQvfSKZft'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387921 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387947 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53cfcf66--6862--5829--a71b--dc902cfbd9df-osd--block--53cfcf66--6862--5829--a71b--dc902cfbd9df', 'dm-uuid-LVM-u04ANOtmOdGz1Vzl9h6jqIKzRS7efN642z7ZMI1f66JIrWUs8jF7PnqjXXBvMoRy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387983 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.387992 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d153f4c4--5597--54b4--b460--41e490b92c19-osd--block--d153f4c4--5597--54b4--b460--41e490b92c19', 'dm-uuid-LVM-PYU5eiYmArZZx9l0IRv7NkCQeLmEUpEudrGIxN3Awr1GUIw1Dw6FjNk2029z1Y9Y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388023 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388031 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388045 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388054 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8f56c737--ae06--5042--be62--d4d7430a3913-osd--block--8f56c737--ae06--5042--be62--d4d7430a3913'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kKUXcV-NRc8-Te46-jGWo-Ip4f-DlWw-6i6xRr', 'scsi-0QEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05', 'scsi-SQEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388153 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3-osd--block--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iALSnE-fged-l0II-iQ2Q-DplQ-iluv-DkubK5', 'scsi-0QEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648', 'scsi-SQEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388162 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388170 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677', 'scsi-SQEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388184 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388196 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388205 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.388213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388225 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388233 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388242 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388273 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part1', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part14', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part15', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part16', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388283 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--53cfcf66--6862--5829--a71b--dc902cfbd9df-osd--block--53cfcf66--6862--5829--a71b--dc902cfbd9df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rHhLav-nIRm-kwul-12gR-Y0i1-rO5X-mga0H8', 'scsi-0QEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8', 'scsi-SQEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388297 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d153f4c4--5597--54b4--b460--41e490b92c19-osd--block--d153f4c4--5597--54b4--b460--41e490b92c19'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zRfxmE-geHX-KaCf-Tjbv-h6oW-e94U-M8FcSh', 'scsi-0QEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e', 'scsi-SQEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388310 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3', 'scsi-SQEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388332 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:43:36.388341 | orchestrator | skipping: [testbed-node-5]
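All of the per-device skips above are the OSD device-discovery loop rejecting every entry of ansible_facts.devices, because osd_auto_discovery is left at its default of false in this testbed and the OSDs instead use an explicitly configured devices list. A minimal sketch of a task with this shape, assuming illustrative variable and task names rather than the literal ceph-ansible source:

    # Sketch only: collect candidate OSD disks when auto discovery is enabled.
    # "_osd_devices" and the filter conditions are assumptions for illustration.
    - name: Collect candidate devices for OSDs (auto discovery)
      ansible.builtin.set_fact:
        _osd_devices: "{{ _osd_devices | default([]) + ['/dev/' + item.key] }}"
      when:
        - osd_auto_discovery | default(False) | bool   # false here, so every item skips
        - item.value.removable == '0'                  # drop DVD/USB media such as sr0
        - item.value.partitions | length == 0          # drop disks that already carry partitions
        - item.value.holders | length == 0             # drop disks already claimed (LVM/ceph)
      loop: "{{ ansible_facts['devices'] | dict2items }}"
      loop_control:
        label: "{{ item.key }}"

With the guard false, Ansible still iterates the loop and prints each rejected item, which is exactly the verbose skip listing recorded here.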
2025-05-13 23:43:36.388349 | orchestrator |
2025-05-13 23:43:36.388357 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-05-13 23:43:36.388366 | orchestrator | Tuesday 13 May 2025 23:32:41 +0000 (0:00:01.600) 0:00:32.635 ***********
2025-05-13 23:43:36.388374 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.388382 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.388390 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.388402 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.388410 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.388418 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.388426 | orchestrator |
2025-05-13 23:43:36.388434 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-05-13 23:43:36.388442 | orchestrator | Tuesday 13 May 2025 23:32:42 +0000 (0:00:01.062) 0:00:33.697 ***********
2025-05-13 23:43:36.388450 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.388458 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.388465 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.388473 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.388486 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.388494 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.388502 | orchestrator |
2025-05-13 23:43:36.388510 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-13 23:43:36.388518 | orchestrator | Tuesday 13 May 2025 23:32:43 +0000 (0:00:00.598) 0:00:34.296 ***********
2025-05-13 23:43:36.388525 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.388533 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.388541 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.388549 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.388557 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.388565 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.388573 | orchestrator |
2025-05-13 23:43:36.388581 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-13 23:43:36.388588 | orchestrator | Tuesday 13 May 2025 23:32:44 +0000 (0:00:01.124) 0:00:35.420 ***********
2025-05-13 23:43:36.388596 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.388604 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.388612 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.388619 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.388629 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.388696 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.388711 | orchestrator |
2025-05-13 23:43:36.388724 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-13 23:43:36.388735 | orchestrator | Tuesday 13 May 2025 23:32:45 +0000 (0:00:00.655) 0:00:36.075 ***********
2025-05-13 23:43:36.388743 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.388751 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.388759 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.388766 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.388774 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.388781 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.388789 | orchestrator |
2025-05-13 23:43:36.388797 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-13 23:43:36.388804 | orchestrator | Tuesday 13 May 2025 23:32:45 +0000 (0:00:00.888) 0:00:36.964 ***********
2025-05-13 23:43:36.388812 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.388820 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.388827 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.388835 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.388842 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.388850 | orchestrator | skipping: [testbed-node-5]
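The conf check and the two Read/Set osd_pool_default_crush_rule pairs above follow a seed-then-override pattern: the fact is first seeded with the configured default, and only when a live ceph.conf already exists is the running value read back and the fact overwritten. On this fresh cluster every read/set pair skips. Roughly, under assumed register and variable names (not the literal ceph-ansible tasks):

    # Sketch only: "ceph_conf" and "crush_rule_read" are hypothetical names.
    - name: Check if the ceph conf exists
      ansible.builtin.stat:
        path: /etc/ceph/ceph.conf
      register: ceph_conf

    - name: Set default osd_pool_default_crush_rule fact
      ansible.builtin.set_fact:
        osd_pool_default_crush_rule: "{{ ceph_osd_pool_default_crush_rule | default(-1) }}"

    - name: Read osd pool default crush rule
      ansible.builtin.command: ceph-conf --lookup osd_pool_default_crush_rule -c /etc/ceph/ceph.conf
      register: crush_rule_read
      changed_when: false
      failed_when: false
      when: ceph_conf.stat.exists | bool   # false on a fresh deployment, hence the skips

    - name: Set osd_pool_default_crush_rule fact
      ansible.builtin.set_fact:
        osd_pool_default_crush_rule: "{{ crush_rule_read.stdout }}"
      when:
        - ceph_conf.stat.exists | bool
        - crush_rule_read.rc == 0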
2025-05-13 23:43:36.388857 | orchestrator |
2025-05-13 23:43:36.388865 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-05-13 23:43:36.388873 | orchestrator | Tuesday 13 May 2025 23:32:47 +0000 (0:00:02.990) 0:00:38.021 ***********
2025-05-13 23:43:36.388880 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-05-13 23:43:36.388889 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-05-13 23:43:36.388896 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-05-13 23:43:36.388904 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-13 23:43:36.388911 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-13 23:43:36.388919 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-05-13 23:43:36.388927 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-13 23:43:36.388934 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-13 23:43:36.388942 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-05-13 23:43:36.388950 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-13 23:43:36.388958 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-05-13 23:43:36.388970 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-13 23:43:36.388978 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-13 23:43:36.389009 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-13 23:43:36.389017 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-13 23:43:36.389025 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-13 23:43:36.389032 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-13 23:43:36.389040 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-13 23:43:36.389048 | orchestrator |
2025-05-13 23:43:36.389055 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-05-13 23:43:36.389063 | orchestrator | Tuesday 13 May 2025 23:32:50 +0000 (0:00:01.728) 0:00:41.012 ***********
2025-05-13 23:43:36.389071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-13 23:43:36.389079 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-13 23:43:36.389087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-13 23:43:36.389094 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.389102 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-13 23:43:36.389110 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-13 23:43:36.389117 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-13 23:43:36.389125 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.389133 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-13 23:43:36.389141 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-13 23:43:36.389148 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-13 23:43:36.389156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-13 23:43:36.389175 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-13 23:43:36.389184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-13 23:43:36.389191 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.389199 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-13 23:43:36.389207 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-13 23:43:36.389215 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-13 23:43:36.389222 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.389230 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.389238 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-13 23:43:36.389245 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
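The _monitor_addresses fact above is built once per IP family: every host loops over the three monitors and appends each monitor's name and address from hostvars, while a when: guard on ip_version makes the unused family (ipv6 here) iterate and skip. A sketch of that shape; the real role derives the address from the configured monitor/public network rather than the default interface used here for brevity:

    # Sketch only: address lookup simplified to the default interface.
    - name: Set_fact _monitor_addresses - ipv4
      ansible.builtin.set_fact:
        _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['ansible_facts']['default_ipv4']['address']}] }}"
      with_items: "{{ groups.get(mon_group_name, []) }}"
      when: ip_version == 'ipv4'

    - name: Set_fact _monitor_addresses - ipv6
      ansible.builtin.set_fact:
        _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['ansible_facts']['default_ipv6']['address']}] }}"
      with_items: "{{ groups.get(mon_group_name, []) }}"
      when: ip_version == 'ipv6'   # ip_version is ipv4 in this deployment, so every item skips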
2025-05-13 23:43:36.389252 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-13 23:43:36.389258 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.389265 | orchestrator |
2025-05-13 23:43:36.389272 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-05-13 23:43:36.389278 | orchestrator | Tuesday 13 May 2025 23:32:51 +0000 (0:00:01.517) 0:00:42.741 ***********
2025-05-13 23:43:36.389285 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.389292 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.389298 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.389305 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:43:36.389312 | orchestrator |
2025-05-13 23:43:36.389319 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-13 23:43:36.389327 | orchestrator | Tuesday 13 May 2025 23:32:53 +0000 (0:00:01.517) 0:00:44.259 ***********
2025-05-13 23:43:36.389334 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.389340 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.389347 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.389354 | orchestrator |
2025-05-13 23:43:36.389360 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-13 23:43:36.389367 | orchestrator | Tuesday 13 May 2025 23:32:53 +0000 (0:00:00.702) 0:00:44.962 ***********
2025-05-13 23:43:36.389378 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.389385 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.389392 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.389398 | orchestrator |
2025-05-13 23:43:36.389405 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-13 23:43:36.389411 | orchestrator | Tuesday 13 May 2025 23:32:54 +0000 (0:00:00.912) 0:00:45.874 ***********
2025-05-13 23:43:36.389418 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.389425 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.389431 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.389438 | orchestrator |
2025-05-13 23:43:36.389444 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-05-13 23:43:36.389451 | orchestrator | Tuesday 13 May 2025 23:32:55 +0000 (0:00:00.413) 0:00:46.288 ***********
2025-05-13 23:43:36.389458 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.389464 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.389471 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.389477 | orchestrator |
2025-05-13 23:43:36.389484 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-05-13 23:43:36.389491 | orchestrator | Tuesday 13 May 2025 23:32:55 +0000 (0:00:00.431) 0:00:46.719 ***********
2025-05-13 23:43:36.389497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-13 23:43:36.389504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-13 23:43:36.389510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-13 23:43:36.389517 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.389524 | orchestrator |
2025-05-13 23:43:36.389530 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-13 23:43:36.389537 | orchestrator | Tuesday 13 May 2025 23:32:56 +0000 (0:00:00.380) 0:00:47.099 ***********
2025-05-13 23:43:36.389543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-13 23:43:36.389553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-13 23:43:36.389560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-13 23:43:36.389566 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.389573 | orchestrator |
2025-05-13 23:43:36.389579 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-13 23:43:36.389586 | orchestrator | Tuesday 13 May 2025 23:32:56 +0000 (0:00:00.463) 0:00:47.563 ***********
2025-05-13 23:43:36.389593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-13 23:43:36.389599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-13 23:43:36.389606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-13 23:43:36.389612 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.389619 | orchestrator |
2025-05-13 23:43:36.389626 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-05-13 23:43:36.389632 | orchestrator | Tuesday 13 May 2025 23:32:57 +0000 (0:00:00.766) 0:00:48.329 ***********
2025-05-13 23:43:36.389655 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.389667 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.389674 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.389681 | orchestrator |
2025-05-13 23:43:36.389687 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-05-13 23:43:36.389694 | orchestrator | Tuesday 13 May 2025 23:32:57 +0000 (0:00:00.586) 0:00:48.916 ***********
2025-05-13 23:43:36.389701 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-13 23:43:36.389708 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-13 23:43:36.389714 | orchestrator | ok: [testbed-node-5] => (item=0)
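set_radosgw_address.yml resolves each RGW host's bind address by precedence: radosgw_address_block first, then radosgw_address, then radosgw_interface. In this run the plain radosgw_address branch matched on the three RGW nodes, and one rgw instance per host was then derived from it. A condensed sketch, assuming ceph-ansible-style sentinel defaults (the 'subnet' and 'x.x.x.x' placeholders are assumptions):

    # Sketch only: sentinel values and variable names are assumptions.
    - name: Set_fact _radosgw_address to radosgw_address
      ansible.builtin.set_fact:
        _radosgw_address: "{{ radosgw_address }}"
      when:
        - radosgw_address_block | default('subnet') == 'subnet'   # no address block configured
        - radosgw_address | default('x.x.x.x') != 'x.x.x.x'       # an explicit address was set

    - name: Set_fact rgw_instances
      ansible.builtin.set_fact:
        rgw_instances: "{{ rgw_instances | default([]) + [{'instance_name': 'rgw' ~ item, 'radosgw_address': _radosgw_address, 'radosgw_frontend_port': radosgw_frontend_port | int + item | int}] }}"
      with_sequence: start=0 end={{ radosgw_num_instances | default(1) | int - 1 }}

With one instance per host, the with_sequence loop runs exactly once, matching the single (item=0) result on each of testbed-node-3/4/5 above.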
2025-05-13 23:43:36.389721 | orchestrator |
2025-05-13 23:43:36.389731 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-05-13 23:43:36.389742 | orchestrator | Tuesday 13 May 2025 23:32:58 +0000 (0:00:00.663) 0:00:49.579 ***********
2025-05-13 23:43:36.389763 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-13 23:43:36.389786 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-13 23:43:36.389797 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-13 23:43:36.389807 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-13 23:43:36.389818 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-13 23:43:36.389828 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-13 23:43:36.389838 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
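The 'testbed-node-0 -> testbed-node-N' prefixes on the ceph_run_cmd items show the run_once-plus-delegation pattern: the task executes on one host but loops over the whole inventory, delegating each iteration and storing the fact on the delegate. The fact itself usually wraps the ceph CLI in the container runtime. In spirit, with assumed variable names and an assumed command string:

    # Sketch only: the wrapped command and loop list are assumptions.
    - name: Set_fact ceph_run_cmd
      ansible.builtin.set_fact:
        ceph_run_cmd: "{{ container_binary }} run --rm --net=host -v /etc/ceph:/etc/ceph:z --entrypoint=ceph {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
      delegate_to: "{{ item }}"
      delegate_facts: true
      run_once: true
      with_items: "{{ groups['all'] }}"

delegate_facts: true is what makes the fact land on each delegated host instead of on testbed-node-0, which is why the manager and all six nodes end up with their own ceph_run_cmd.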
2025-05-13 23:43:36.389848 | orchestrator |
2025-05-13 23:43:36.389859 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-05-13 23:43:36.389870 | orchestrator | Tuesday 13 May 2025 23:32:59 +0000 (0:00:01.400) 0:00:50.980 ***********
2025-05-13 23:43:36.389881 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-13 23:43:36.389892 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-13 23:43:36.389903 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-13 23:43:36.389914 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-13 23:43:36.389925 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-13 23:43:36.389932 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-13 23:43:36.389938 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-13 23:43:36.389945 | orchestrator |
2025-05-13 23:43:36.389952 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-13 23:43:36.389958 | orchestrator | Tuesday 13 May 2025 23:33:03 +0000 (0:00:03.141) 0:00:54.122 ***********
2025-05-13 23:43:36.389965 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:43:36.389987 | orchestrator |
2025-05-13 23:43:36.389994 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-13 23:43:36.390001 | orchestrator | Tuesday 13 May 2025 23:33:04 +0000 (0:00:01.570) 0:00:55.692 ***********
2025-05-13 23:43:36.390008 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:43:36.390041 | orchestrator |
2025-05-13 23:43:36.390049 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-13 23:43:36.390056 | orchestrator | Tuesday 13 May 2025 23:33:06 +0000 (0:00:01.679) 0:00:57.372 ***********
2025-05-13 23:43:36.390063 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.390071 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.390077 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.390084 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.390091 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.390098 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.390104 | orchestrator |
2025-05-13 23:43:36.390112 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-13 23:43:36.390118 | orchestrator | Tuesday 13 May 2025 23:33:07 +0000 (0:00:01.212) 0:00:58.584 ***********
2025-05-13 23:43:36.390125 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.390132 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.390138 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.390145 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.390151 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.390158 | orchestrator | ok: [testbed-node-5]
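Each 'Check for a … container' probe runs only on the hosts in the matching group (mons on nodes 0-2, OSDs on nodes 3-5, hence the complementary ok/skipping patterns) and simply asks the container runtime whether the daemon's container exists; the results later drive the handler_*_status facts. A sketch of the pattern, assuming a docker/podman-style runtime and a hypothetical register name:

    # Sketch only: filter expression and register name are assumptions.
    - name: Check for a mon container
      ansible.builtin.command: "{{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
      register: ceph_mon_container_stat
      changed_when: false
      failed_when: false
      check_mode: false
      when: inventory_hostname in groups.get(mon_group_name, [])

    - name: Set_fact handler_mon_status
      ansible.builtin.set_fact:
        handler_mon_status: "{{ ceph_mon_container_stat.stdout_lines | default([]) | length > 0 }}"
      when: inventory_hostname in groups.get(mon_group_name, [])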
2025-05-13 23:43:36.390164 | orchestrator |
2025-05-13 23:43:36.390179 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-13 23:43:36.390192 | orchestrator | Tuesday 13 May 2025 23:33:09 +0000 (0:00:01.836) 0:01:00.420 ***********
2025-05-13 23:43:36.390198 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.390205 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.390211 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.390218 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.390224 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.390231 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.390237 | orchestrator |
2025-05-13 23:43:36.390244 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-13 23:43:36.390251 | orchestrator | Tuesday 13 May 2025 23:33:10 +0000 (0:00:01.369) 0:01:01.789 ***********
2025-05-13 23:43:36.390257 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.390264 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.390270 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.390277 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:43:36.390283 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:43:36.390290 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:43:36.390297 | orchestrator |
2025-05-13 23:43:36.390303 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-13 23:43:36.390310 | orchestrator | Tuesday 13 May 2025 23:33:11 +0000 (0:00:00.931) 0:01:02.970 ***********
2025-05-13 23:43:36.390316 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:43:36.390323 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.390329 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:43:36.390336 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:43:36.390343 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.390349 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.390356 | orchestrator |
2025-05-13 23:43:36.390362 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-13 23:43:36.390369 | orchestrator | Tuesday 13 May 2025 23:33:12 +0000 (0:00:00.931) 0:01:03.901 ***********
2025-05-13 23:43:36.390386 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.390393 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.390400 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.390407 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.390414 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.390420 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.390427 | orchestrator |
2025-05-13 23:43:36.390434 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-13 23:43:36.390440 | orchestrator | Tuesday 13 May 2025 23:33:13 +0000 (0:00:00.781) 0:01:04.683 ***********
2025-05-13 23:43:36.390447 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:43:36.390454 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:43:36.390460 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:43:36.390467 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:43:36.390473 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:43:36.390480 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:43:36.390486 | orchestrator |
2025-05-13 23:43:36.390493 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025 23:33:14 +0000 (0:00:00.926) 0:01:05.611 *********** 2025-05-13 23:43:36.390506 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.390513 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.390519 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.390526 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.390532 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.390539 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.390545 | orchestrator | 2025-05-13 23:43:36.390552 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 23:43:36.390558 | orchestrator | Tuesday 13 May 2025 23:33:16 +0000 (0:00:01.932) 0:01:07.544 *********** 2025-05-13 23:43:36.390565 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.390572 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.390584 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.390591 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.390597 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.390604 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.390610 | orchestrator | 2025-05-13 23:43:36.390617 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 23:43:36.390624 | orchestrator | Tuesday 13 May 2025 23:33:18 +0000 (0:00:01.656) 0:01:09.200 *********** 2025-05-13 23:43:36.390631 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.390658 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.390666 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.390672 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.390679 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.390686 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.390710 | orchestrator | 2025-05-13 23:43:36.390717 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 23:43:36.390724 | orchestrator | Tuesday 13 May 2025 23:33:18 +0000 (0:00:00.776) 0:01:09.977 *********** 2025-05-13 23:43:36.390731 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.390737 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.390744 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.390751 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.390757 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.390764 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.390770 | orchestrator | 2025-05-13 23:43:36.390777 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 23:43:36.390783 | orchestrator | Tuesday 13 May 2025 23:33:20 +0000 (0:00:01.631) 0:01:11.608 *********** 2025-05-13 23:43:36.390790 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.390796 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.390803 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.390809 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.390816 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.390822 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.390829 | orchestrator | 2025-05-13 23:43:36.390835 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 23:43:36.390842 | orchestrator | Tuesday 13 May 2025 23:33:21 +0000 (0:00:01.182) 0:01:12.791 *********** 2025-05-13 23:43:36.390848 | orchestrator | 
skipping: [testbed-node-0] 2025-05-13 23:43:36.390855 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.390862 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.390869 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.390875 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.390882 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.390888 | orchestrator | 2025-05-13 23:43:36.390899 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 23:43:36.390906 | orchestrator | Tuesday 13 May 2025 23:33:23 +0000 (0:00:01.836) 0:01:14.628 *********** 2025-05-13 23:43:36.390913 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.390920 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.390926 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.390933 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.390939 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.390946 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.390952 | orchestrator | 2025-05-13 23:43:36.390959 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 23:43:36.390966 | orchestrator | Tuesday 13 May 2025 23:33:24 +0000 (0:00:00.791) 0:01:15.419 *********** 2025-05-13 23:43:36.390972 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.390979 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.390985 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.390992 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.390998 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.391005 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.391033 | orchestrator | 2025-05-13 23:43:36.391040 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 23:43:36.391047 | orchestrator | Tuesday 13 May 2025 23:33:25 +0000 (0:00:00.995) 0:01:16.414 *********** 2025-05-13 23:43:36.391054 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.391061 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.391068 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.391074 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.391081 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.391088 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.391094 | orchestrator | 2025-05-13 23:43:36.391101 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 23:43:36.391113 | orchestrator | Tuesday 13 May 2025 23:33:26 +0000 (0:00:00.683) 0:01:17.098 *********** 2025-05-13 23:43:36.391120 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.391127 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.391133 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.391140 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.391146 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.391153 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.391159 | orchestrator | 2025-05-13 23:43:36.391166 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 23:43:36.391173 | orchestrator | Tuesday 13 May 2025 23:33:27 +0000 (0:00:01.044) 0:01:18.143 *********** 2025-05-13 23:43:36.391179 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.391186 
| orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.391192 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.391199 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.391205 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.391212 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.391218 | orchestrator | 2025-05-13 23:43:36.391225 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 23:43:36.391232 | orchestrator | Tuesday 13 May 2025 23:33:28 +0000 (0:00:00.941) 0:01:19.084 *********** 2025-05-13 23:43:36.391238 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.391245 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.391251 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.391258 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.391264 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.391271 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.391278 | orchestrator | 2025-05-13 23:43:36.391285 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-05-13 23:43:36.391291 | orchestrator | Tuesday 13 May 2025 23:33:29 +0000 (0:00:01.513) 0:01:20.598 *********** 2025-05-13 23:43:36.391298 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.391305 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.391312 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.391318 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.391325 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.391331 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.391338 | orchestrator | 2025-05-13 23:43:36.391345 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-05-13 23:43:36.391351 | orchestrator | Tuesday 13 May 2025 23:33:31 +0000 (0:00:01.942) 0:01:22.540 *********** 2025-05-13 23:43:36.391358 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.391364 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.391371 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.391377 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.391384 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.391391 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.391397 | orchestrator | 2025-05-13 23:43:36.391404 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-05-13 23:43:36.391410 | orchestrator | Tuesday 13 May 2025 23:33:33 +0000 (0:00:02.022) 0:01:24.563 *********** 2025-05-13 23:43:36.391418 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.391430 | orchestrator | 2025-05-13 23:43:36.391437 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-05-13 23:43:36.391444 | orchestrator | Tuesday 13 May 2025 23:33:34 +0000 (0:00:01.223) 0:01:25.786 *********** 2025-05-13 23:43:36.391450 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.391457 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.391463 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.391470 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.391477 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.391483 
| orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.391490 | orchestrator | 2025-05-13 23:43:36.391496 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-05-13 23:43:36.391505 | orchestrator | Tuesday 13 May 2025 23:33:35 +0000 (0:00:00.903) 0:01:26.690 *********** 2025-05-13 23:43:36.391517 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.391529 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.391541 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.391552 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.391564 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.391580 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.391592 | orchestrator | 2025-05-13 23:43:36.391604 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-05-13 23:43:36.391616 | orchestrator | Tuesday 13 May 2025 23:33:36 +0000 (0:00:00.591) 0:01:27.282 *********** 2025-05-13 23:43:36.391628 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 23:43:36.391692 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 23:43:36.391701 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 23:43:36.391708 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 23:43:36.391714 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 23:43:36.391721 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 23:43:36.391728 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 23:43:36.391735 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 23:43:36.391741 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 23:43:36.391748 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 23:43:36.391755 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 23:43:36.391761 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 23:43:36.391768 | orchestrator | 2025-05-13 23:43:36.391786 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-05-13 23:43:36.391793 | orchestrator | Tuesday 13 May 2025 23:33:38 +0000 (0:00:01.726) 0:01:29.009 *********** 2025-05-13 23:43:36.391800 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.391807 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.391813 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.391820 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.391827 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.391834 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.391840 | orchestrator | 2025-05-13 23:43:36.391847 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-05-13 23:43:36.391854 | orchestrator | Tuesday 13 May 2025 23:33:39 +0000 (0:00:00.998) 0:01:30.007 *********** 2025-05-13 23:43:36.391860 | orchestrator | skipping: 
[testbed-node-0] 2025-05-13 23:43:36.391874 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.391881 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.391888 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.391894 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.391901 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.391908 | orchestrator | 2025-05-13 23:43:36.391915 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-05-13 23:43:36.391921 | orchestrator | Tuesday 13 May 2025 23:33:39 +0000 (0:00:00.660) 0:01:30.667 *********** 2025-05-13 23:43:36.391928 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.391935 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.391942 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.391948 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.391955 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.391961 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.391968 | orchestrator | 2025-05-13 23:43:36.391975 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-05-13 23:43:36.391982 | orchestrator | Tuesday 13 May 2025 23:33:40 +0000 (0:00:00.557) 0:01:31.225 *********** 2025-05-13 23:43:36.391988 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.391995 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.392002 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.392008 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.392015 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.392022 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.392032 | orchestrator | 2025-05-13 23:43:36.392045 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-05-13 23:43:36.392062 | orchestrator | Tuesday 13 May 2025 23:33:40 +0000 (0:00:00.744) 0:01:31.969 *********** 2025-05-13 23:43:36.392072 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.392082 | orchestrator | 2025-05-13 23:43:36.392092 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-05-13 23:43:36.392102 | orchestrator | Tuesday 13 May 2025 23:33:42 +0000 (0:00:01.348) 0:01:33.318 *********** 2025-05-13 23:43:36.392111 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.392120 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.392131 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.392142 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.392153 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.392164 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.392174 | orchestrator | 2025-05-13 23:43:36.392184 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-05-13 23:43:36.392193 | orchestrator | Tuesday 13 May 2025 23:34:57 +0000 (0:01:15.139) 0:02:48.457 *********** 2025-05-13 23:43:36.392199 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 23:43:36.392205 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 23:43:36.392212 | orchestrator | skipping: 
[testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 23:43:36.392218 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.392224 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 23:43:36.392236 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 23:43:36.392243 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 23:43:36.392249 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.392255 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 23:43:36.392261 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 23:43:36.392267 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 23:43:36.392280 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.392286 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 23:43:36.392292 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 23:43:36.392298 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 23:43:36.392304 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.392310 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 23:43:36.392317 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 23:43:36.392323 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 23:43:36.392329 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.392335 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 23:43:36.392341 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 23:43:36.392347 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 23:43:36.392360 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.392367 | orchestrator | 2025-05-13 23:43:36.392373 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-05-13 23:43:36.392380 | orchestrator | Tuesday 13 May 2025 23:34:58 +0000 (0:00:01.028) 0:02:49.486 *********** 2025-05-13 23:43:36.392386 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.392392 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.392399 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.392405 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.392411 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.392417 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.392423 | orchestrator | 2025-05-13 23:43:36.392430 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-05-13 23:43:36.392436 | orchestrator | Tuesday 13 May 2025 23:34:59 +0000 (0:00:00.661) 0:02:50.148 *********** 2025-05-13 23:43:36.392442 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.392448 | orchestrator | 2025-05-13 23:43:36.392455 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-05-13 23:43:36.392461 | orchestrator | Tuesday 13 May 2025 23:34:59 +0000 (0:00:00.150) 
0:02:50.298 *********** 2025-05-13 23:43:36.392467 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.392473 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.392479 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.392485 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.392492 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.392498 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.392504 | orchestrator | 2025-05-13 23:43:36.392511 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-05-13 23:43:36.392517 | orchestrator | Tuesday 13 May 2025 23:35:00 +0000 (0:00:01.354) 0:02:51.652 *********** 2025-05-13 23:43:36.392523 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.392530 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.392536 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.392542 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.392548 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.392555 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.392561 | orchestrator | 2025-05-13 23:43:36.392567 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-05-13 23:43:36.392574 | orchestrator | Tuesday 13 May 2025 23:35:01 +0000 (0:00:00.859) 0:02:52.512 *********** 2025-05-13 23:43:36.392580 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.392586 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.392592 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.392599 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.392605 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.392616 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.392623 | orchestrator | 2025-05-13 23:43:36.392629 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-05-13 23:43:36.392635 | orchestrator | Tuesday 13 May 2025 23:35:02 +0000 (0:00:00.777) 0:02:53.289 *********** 2025-05-13 23:43:36.392662 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.392668 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.392674 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.392681 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.392687 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.392693 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.392700 | orchestrator | 2025-05-13 23:43:36.392706 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-05-13 23:43:36.392712 | orchestrator | Tuesday 13 May 2025 23:35:04 +0000 (0:00:02.353) 0:02:55.642 *********** 2025-05-13 23:43:36.392718 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.392724 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.392731 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.392737 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.392743 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.392749 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.392755 | orchestrator | 2025-05-13 23:43:36.392761 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-05-13 23:43:36.392768 | orchestrator | Tuesday 13 May 2025 23:35:05 +0000 (0:00:00.792) 0:02:56.435 *********** 2025-05-13 23:43:36.392774 | 
orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.392796 | orchestrator | 2025-05-13 23:43:36.392807 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-05-13 23:43:36.392813 | orchestrator | Tuesday 13 May 2025 23:35:06 +0000 (0:00:01.134) 0:02:57.569 *********** 2025-05-13 23:43:36.392819 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.392826 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.392832 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.392838 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.392844 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.392850 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.392856 | orchestrator | 2025-05-13 23:43:36.392863 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-05-13 23:43:36.392869 | orchestrator | Tuesday 13 May 2025 23:35:07 +0000 (0:00:00.706) 0:02:58.276 *********** 2025-05-13 23:43:36.392875 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.392881 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.392887 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.392893 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.392900 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.392906 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.392913 | orchestrator | 2025-05-13 23:43:36.392919 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-05-13 23:43:36.392925 | orchestrator | Tuesday 13 May 2025 23:35:08 +0000 (0:00:00.966) 0:02:59.242 *********** 2025-05-13 23:43:36.392932 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.392938 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.392944 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.392950 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.392956 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.392962 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.392969 | orchestrator | 2025-05-13 23:43:36.392985 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-05-13 23:43:36.392991 | orchestrator | Tuesday 13 May 2025 23:35:08 +0000 (0:00:00.633) 0:02:59.875 *********** 2025-05-13 23:43:36.392998 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.393004 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.393017 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.393023 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.393030 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.393036 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.393042 | orchestrator | 2025-05-13 23:43:36.393048 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-05-13 23:43:36.393054 | orchestrator | Tuesday 13 May 2025 23:35:09 +0000 (0:00:00.980) 0:03:00.856 *********** 2025-05-13 23:43:36.393060 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.393066 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.393073 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.393079 | 
orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.393086 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.393092 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.393098 | orchestrator | 2025-05-13 23:43:36.393104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-05-13 23:43:36.393111 | orchestrator | Tuesday 13 May 2025 23:35:10 +0000 (0:00:00.718) 0:03:01.575 *********** 2025-05-13 23:43:36.393117 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.393123 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.393129 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.393135 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.393141 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.393147 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.393153 | orchestrator | 2025-05-13 23:43:36.393159 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-05-13 23:43:36.393166 | orchestrator | Tuesday 13 May 2025 23:35:11 +0000 (0:00:00.892) 0:03:02.467 *********** 2025-05-13 23:43:36.393172 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.393178 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.393185 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.393191 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.393197 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.393203 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.393209 | orchestrator | 2025-05-13 23:43:36.393216 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-05-13 23:43:36.393222 | orchestrator | Tuesday 13 May 2025 23:35:12 +0000 (0:00:00.759) 0:03:03.226 *********** 2025-05-13 23:43:36.393228 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.393235 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.393241 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.393247 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.393253 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.393259 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.393265 | orchestrator | 2025-05-13 23:43:36.393272 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-05-13 23:43:36.393279 | orchestrator | Tuesday 13 May 2025 23:35:13 +0000 (0:00:00.858) 0:03:04.085 *********** 2025-05-13 23:43:36.393285 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.393291 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.393297 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.393303 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.393309 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.393315 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.393322 | orchestrator | 2025-05-13 23:43:36.393328 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-05-13 23:43:36.393334 | orchestrator | Tuesday 13 May 2025 23:35:14 +0000 (0:00:01.280) 0:03:05.366 *********** 2025-05-13 23:43:36.393340 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.393347 | orchestrator | 2025-05-13 
23:43:36.393353 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-05-13 23:43:36.393376 | orchestrator | Tuesday 13 May 2025 23:35:15 +0000 (0:00:01.372) 0:03:06.739 *********** 2025-05-13 23:43:36.393382 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-13 23:43:36.393389 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-13 23:43:36.393396 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-13 23:43:36.393402 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-13 23:43:36.393408 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-13 23:43:36.393414 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-13 23:43:36.393420 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-13 23:43:36.393426 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-13 23:43:36.393432 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-13 23:43:36.393438 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-13 23:43:36.393444 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-13 23:43:36.393451 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-13 23:43:36.393457 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-13 23:43:36.393464 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-13 23:43:36.393470 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-13 23:43:36.393476 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-13 23:43:36.393482 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-13 23:43:36.393550 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-13 23:43:36.393568 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-13 23:43:36.393574 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-13 23:43:36.393581 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-13 23:43:36.393595 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-13 23:43:36.393602 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-13 23:43:36.393608 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-13 23:43:36.393614 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-13 23:43:36.393620 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-13 23:43:36.393626 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-13 23:43:36.393632 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-13 23:43:36.393661 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-13 23:43:36.393672 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-13 23:43:36.393683 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-13 23:43:36.393693 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-13 23:43:36.393703 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-13 23:43:36.393710 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-13 23:43:36.393716 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-13 23:43:36.393722 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-05-13 23:43:36.393728 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-05-13 23:43:36.393734 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-05-13 23:43:36.393741 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-05-13 23:43:36.393747 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-13 23:43:36.393753 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-05-13 23:43:36.393759 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-13 23:43:36.393765 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-13 23:43:36.393792 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-13 23:43:36.393799 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-13 23:43:36.393805 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-05-13 23:43:36.393811 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-13 23:43:36.393817 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 23:43:36.393823 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 23:43:36.393829 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 23:43:36.393835 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 23:43:36.393841 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 23:43:36.393847 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 23:43:36.393853 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-13 23:43:36.393859 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 23:43:36.393865 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 23:43:36.393871 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 23:43:36.393877 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 23:43:36.393883 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 23:43:36.393889 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 23:43:36.393895 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 23:43:36.393901 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 23:43:36.393913 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 23:43:36.393919 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 23:43:36.393925 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 23:43:36.393931 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 23:43:36.393937 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 23:43:36.393943 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 23:43:36.393949 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 23:43:36.393955 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 23:43:36.393961 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 23:43:36.393967 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 23:43:36.393973 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 23:43:36.393979 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 23:43:36.393985 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 23:43:36.393991 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 23:43:36.393997 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 23:43:36.394003 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 23:43:36.394009 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 23:43:36.394052 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 23:43:36.394060 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 23:43:36.394066 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 23:43:36.394084 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 23:43:36.394091 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-13 23:43:36.394097 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-13 23:43:36.394103 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-13 23:43:36.394109 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-13 23:43:36.394115 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-13 23:43:36.394122 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-13 23:43:36.394128 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 23:43:36.394134 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-13 23:43:36.394140 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-13 23:43:36.394146 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-13 23:43:36.394152 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-13 23:43:36.394158 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-13 23:43:36.394164 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-13 23:43:36.394170 | orchestrator | 2025-05-13 23:43:36.394177 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-13 23:43:36.394183 | orchestrator | Tuesday 13 May 2025 23:35:22 +0000 (0:00:06.746) 0:03:13.485 *********** 2025-05-13 23:43:36.394189 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394195 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394201 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394208 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 
23:43:36.394214 | orchestrator | 2025-05-13 23:43:36.394221 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-05-13 23:43:36.394227 | orchestrator | Tuesday 13 May 2025 23:35:23 +0000 (0:00:01.050) 0:03:14.535 *********** 2025-05-13 23:43:36.394233 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.394240 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.394246 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.394252 | orchestrator | 2025-05-13 23:43:36.394258 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-05-13 23:43:36.394264 | orchestrator | Tuesday 13 May 2025 23:35:24 +0000 (0:00:00.695) 0:03:15.231 *********** 2025-05-13 23:43:36.394270 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.394276 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.394282 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.394288 | orchestrator | 2025-05-13 23:43:36.394295 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-05-13 23:43:36.394301 | orchestrator | Tuesday 13 May 2025 23:35:25 +0000 (0:00:01.705) 0:03:16.936 *********** 2025-05-13 23:43:36.394312 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394318 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394324 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394330 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.394336 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.394342 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.394348 | orchestrator | 2025-05-13 23:43:36.394366 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-05-13 23:43:36.394377 | orchestrator | Tuesday 13 May 2025 23:35:26 +0000 (0:00:00.685) 0:03:17.622 *********** 2025-05-13 23:43:36.394387 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394399 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394409 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394421 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.394432 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.394441 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.394447 | orchestrator | 2025-05-13 23:43:36.394453 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-05-13 23:43:36.394459 | orchestrator | Tuesday 13 May 2025 23:35:27 +0000 (0:00:01.036) 0:03:18.659 *********** 2025-05-13 23:43:36.394466 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394472 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394478 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394484 | orchestrator | skipping: 
[testbed-node-3] 2025-05-13 23:43:36.394490 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.394496 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.394502 | orchestrator | 2025-05-13 23:43:36.394508 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-05-13 23:43:36.394515 | orchestrator | Tuesday 13 May 2025 23:35:28 +0000 (0:00:00.784) 0:03:19.443 *********** 2025-05-13 23:43:36.394521 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394527 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394544 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394550 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.394556 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.394562 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.394568 | orchestrator | 2025-05-13 23:43:36.394575 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-05-13 23:43:36.394581 | orchestrator | Tuesday 13 May 2025 23:35:29 +0000 (0:00:00.868) 0:03:20.312 *********** 2025-05-13 23:43:36.394587 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394594 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394600 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394606 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.394612 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.394618 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.394624 | orchestrator | 2025-05-13 23:43:36.394630 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-13 23:43:36.394651 | orchestrator | Tuesday 13 May 2025 23:35:30 +0000 (0:00:00.776) 0:03:21.088 *********** 2025-05-13 23:43:36.394659 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394665 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394671 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394677 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.394683 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.394689 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.394695 | orchestrator | 2025-05-13 23:43:36.394702 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-13 23:43:36.394708 | orchestrator | Tuesday 13 May 2025 23:35:31 +0000 (0:00:01.111) 0:03:22.200 *********** 2025-05-13 23:43:36.394714 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394720 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394726 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394733 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.394739 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.394745 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.394751 | orchestrator | 2025-05-13 23:43:36.394757 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-13 23:43:36.394770 | orchestrator | Tuesday 13 May 2025 23:35:31 +0000 (0:00:00.763) 0:03:22.963 *********** 2025-05-13 23:43:36.394777 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394783 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394789 | orchestrator 
| skipping: [testbed-node-2] 2025-05-13 23:43:36.394795 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.394801 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.394807 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.394813 | orchestrator | 2025-05-13 23:43:36.394819 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-13 23:43:36.394825 | orchestrator | Tuesday 13 May 2025 23:35:32 +0000 (0:00:00.859) 0:03:23.823 *********** 2025-05-13 23:43:36.394831 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394838 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394844 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394850 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.394856 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.394862 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.394868 | orchestrator | 2025-05-13 23:43:36.394875 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-05-13 23:43:36.394881 | orchestrator | Tuesday 13 May 2025 23:35:36 +0000 (0:00:03.532) 0:03:27.355 *********** 2025-05-13 23:43:36.394887 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394893 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394899 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394905 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.394911 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.394917 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.394923 | orchestrator | 2025-05-13 23:43:36.394929 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-05-13 23:43:36.394936 | orchestrator | Tuesday 13 May 2025 23:35:37 +0000 (0:00:00.989) 0:03:28.344 *********** 2025-05-13 23:43:36.394942 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.394948 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.394954 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.394960 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.394971 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.394978 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.394984 | orchestrator | 2025-05-13 23:43:36.394990 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-05-13 23:43:36.394996 | orchestrator | Tuesday 13 May 2025 23:35:38 +0000 (0:00:00.804) 0:03:29.149 *********** 2025-05-13 23:43:36.395002 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395009 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395015 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.395021 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.395027 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.395033 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.395039 | orchestrator | 2025-05-13 23:43:36.395045 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-05-13 23:43:36.395052 | orchestrator | Tuesday 13 May 2025 23:35:39 +0000 (0:00:01.058) 0:03:30.208 *********** 2025-05-13 23:43:36.395058 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395064 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395070 | orchestrator | skipping: [testbed-node-2] 
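The rgw instance dicts shown earlier in this log ({'instance_name': 'rgw0', a per-node 'radosgw_address', 'radosgw_frontend_port': 8081}) are the input that the "Render rgw configs" task consumes. A minimal, hypothetical sketch (not the ceph-ansible source) of how such per-instance values could be written into the client.rgw.default.<hostname>.rgw0 sections visible in the "Set config to cluster" items that follow; the use of community.general.ini_file and the task name are editorial assumptions:

```yaml
# Hypothetical sketch, not the ceph-ansible implementation.
# Writes one ceph.conf section per entry in the rgw_instances fact
# that appears in this log (instance rgw0, address, port 8081).
- name: Render rgw frontend settings into ceph.conf (sketch)
  community.general.ini_file:
    path: /etc/ceph/ceph.conf
    section: "client.rgw.default.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
    option: rgw_frontends
    value: "beast endpoint={{ item.radosgw_address }}:{{ item.radosgw_frontend_port }}"
  loop: "{{ rgw_instances }}"
```

On testbed-node-3 this would produce the same value that the log reports in the skipped "Set config to cluster" items: rgw_frontends = beast endpoint=192.168.16.13:8081.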
2025-05-13 23:43:36.395077 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.395083 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.395089 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.395095 | orchestrator | 2025-05-13 23:43:36.395102 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-05-13 23:43:36.395121 | orchestrator | Tuesday 13 May 2025 23:35:40 +0000 (0:00:00.807) 0:03:31.016 *********** 2025-05-13 23:43:36.395128 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395134 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395140 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.395148 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-05-13 23:43:36.395157 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-05-13 23:43:36.395164 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.395171 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-05-13 23:43:36.395178 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-05-13 23:43:36.395184 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.395191 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-05-13 23:43:36.395197 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-05-13 23:43:36.395204 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.395210 | orchestrator | 2025-05-13 23:43:36.395216 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-05-13 23:43:36.395222 | orchestrator | Tuesday 13 May 2025 23:35:41 
+0000 (0:00:01.048) 0:03:32.064 *********** 2025-05-13 23:43:36.395228 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395234 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395240 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.395246 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.395253 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.395259 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.395265 | orchestrator | 2025-05-13 23:43:36.395271 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-05-13 23:43:36.395277 | orchestrator | Tuesday 13 May 2025 23:35:41 +0000 (0:00:00.766) 0:03:32.830 *********** 2025-05-13 23:43:36.395283 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395289 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395295 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.395306 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.395312 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.395318 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.395324 | orchestrator | 2025-05-13 23:43:36.395330 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-13 23:43:36.395359 | orchestrator | Tuesday 13 May 2025 23:35:42 +0000 (0:00:00.916) 0:03:33.747 *********** 2025-05-13 23:43:36.395366 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395372 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395378 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.395384 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.395390 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.395396 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.395402 | orchestrator | 2025-05-13 23:43:36.395408 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-13 23:43:36.395415 | orchestrator | Tuesday 13 May 2025 23:35:43 +0000 (0:00:00.746) 0:03:34.494 *********** 2025-05-13 23:43:36.395421 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395427 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395433 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.395439 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.395445 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.395451 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.395457 | orchestrator | 2025-05-13 23:43:36.395463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-13 23:43:36.395469 | orchestrator | Tuesday 13 May 2025 23:35:44 +0000 (0:00:00.870) 0:03:35.364 *********** 2025-05-13 23:43:36.395476 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395482 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395488 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.395500 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.395506 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.395512 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.395518 | orchestrator | 2025-05-13 23:43:36.395524 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-13 
23:43:36.395530 | orchestrator | Tuesday 13 May 2025 23:35:44 +0000 (0:00:00.590) 0:03:35.954 *********** 2025-05-13 23:43:36.395536 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395542 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395548 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.395554 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.395560 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.395566 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.395572 | orchestrator | 2025-05-13 23:43:36.395579 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-13 23:43:36.395585 | orchestrator | Tuesday 13 May 2025 23:35:46 +0000 (0:00:01.072) 0:03:37.026 *********** 2025-05-13 23:43:36.395591 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-13 23:43:36.395598 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-13 23:43:36.395604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-13 23:43:36.395610 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395616 | orchestrator | 2025-05-13 23:43:36.395622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-13 23:43:36.395628 | orchestrator | Tuesday 13 May 2025 23:35:46 +0000 (0:00:00.429) 0:03:37.456 *********** 2025-05-13 23:43:36.395634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-13 23:43:36.395681 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-13 23:43:36.395688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-13 23:43:36.395694 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395700 | orchestrator | 2025-05-13 23:43:36.395706 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-13 23:43:36.395712 | orchestrator | Tuesday 13 May 2025 23:35:46 +0000 (0:00:00.384) 0:03:37.841 *********** 2025-05-13 23:43:36.395718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-13 23:43:36.395724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-13 23:43:36.395736 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-13 23:43:36.395742 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395748 | orchestrator | 2025-05-13 23:43:36.395754 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-13 23:43:36.395761 | orchestrator | Tuesday 13 May 2025 23:35:47 +0000 (0:00:00.416) 0:03:38.257 *********** 2025-05-13 23:43:36.395767 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395773 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395779 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.395785 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.395791 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.395798 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.395804 | orchestrator | 2025-05-13 23:43:36.395810 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-13 23:43:36.395816 | orchestrator | Tuesday 13 May 2025 23:35:47 +0000 (0:00:00.698) 0:03:38.956 *********** 2025-05-13 23:43:36.395822 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-13 23:43:36.395828 | 
orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.395834 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-13 23:43:36.395840 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.395847 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-13 23:43:36.395853 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.395859 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-13 23:43:36.395865 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-13 23:43:36.395871 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-13 23:43:36.395877 | orchestrator | 2025-05-13 23:43:36.395884 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-05-13 23:43:36.395890 | orchestrator | Tuesday 13 May 2025 23:35:50 +0000 (0:00:02.162) 0:03:41.118 *********** 2025-05-13 23:43:36.395896 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.395902 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.395908 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.395919 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.395925 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.395931 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.395937 | orchestrator | 2025-05-13 23:43:36.395943 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-13 23:43:36.395949 | orchestrator | Tuesday 13 May 2025 23:35:53 +0000 (0:00:03.215) 0:03:44.334 *********** 2025-05-13 23:43:36.395955 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.395962 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.395968 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.395974 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.395980 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.395986 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.395992 | orchestrator | 2025-05-13 23:43:36.395998 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-05-13 23:43:36.396005 | orchestrator | Tuesday 13 May 2025 23:35:54 +0000 (0:00:01.236) 0:03:45.570 *********** 2025-05-13 23:43:36.396011 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396017 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.396023 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.396029 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:43:36.396036 | orchestrator | 2025-05-13 23:43:36.396042 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-05-13 23:43:36.396048 | orchestrator | Tuesday 13 May 2025 23:35:55 +0000 (0:00:01.083) 0:03:46.654 *********** 2025-05-13 23:43:36.396054 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.396060 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.396066 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.396072 | orchestrator | 2025-05-13 23:43:36.396096 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-05-13 23:43:36.396107 | orchestrator | Tuesday 13 May 2025 23:35:55 +0000 (0:00:00.295) 0:03:46.950 *********** 2025-05-13 23:43:36.396113 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.396120 | orchestrator | changed: 
[testbed-node-1] 2025-05-13 23:43:36.396126 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.396132 | orchestrator | 2025-05-13 23:43:36.396138 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-05-13 23:43:36.396144 | orchestrator | Tuesday 13 May 2025 23:35:57 +0000 (0:00:01.386) 0:03:48.336 *********** 2025-05-13 23:43:36.396150 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 23:43:36.396156 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 23:43:36.396162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 23:43:36.396168 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.396174 | orchestrator | 2025-05-13 23:43:36.396181 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-05-13 23:43:36.396190 | orchestrator | Tuesday 13 May 2025 23:35:58 +0000 (0:00:00.748) 0:03:49.085 *********** 2025-05-13 23:43:36.396198 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.396207 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.396216 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.396226 | orchestrator | 2025-05-13 23:43:36.396235 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-05-13 23:43:36.396245 | orchestrator | Tuesday 13 May 2025 23:35:58 +0000 (0:00:00.445) 0:03:49.530 *********** 2025-05-13 23:43:36.396254 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.396264 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.396272 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.396282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.396290 | orchestrator | 2025-05-13 23:43:36.396299 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-05-13 23:43:36.396308 | orchestrator | Tuesday 13 May 2025 23:35:59 +0000 (0:00:01.200) 0:03:50.731 *********** 2025-05-13 23:43:36.396317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 23:43:36.396325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 23:43:36.396335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 23:43:36.396344 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396353 | orchestrator | 2025-05-13 23:43:36.396359 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-05-13 23:43:36.396365 | orchestrator | Tuesday 13 May 2025 23:36:00 +0000 (0:00:00.482) 0:03:51.214 *********** 2025-05-13 23:43:36.396370 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396376 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.396381 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.396386 | orchestrator | 2025-05-13 23:43:36.396391 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-05-13 23:43:36.396397 | orchestrator | Tuesday 13 May 2025 23:36:00 +0000 (0:00:00.420) 0:03:51.634 *********** 2025-05-13 23:43:36.396402 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396407 | orchestrator | 2025-05-13 23:43:36.396412 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-05-13 
23:43:36.396418 | orchestrator | Tuesday 13 May 2025 23:36:00 +0000 (0:00:00.308) 0:03:51.943 *********** 2025-05-13 23:43:36.396423 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396428 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.396434 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.396439 | orchestrator | 2025-05-13 23:43:36.396444 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-05-13 23:43:36.396450 | orchestrator | Tuesday 13 May 2025 23:36:01 +0000 (0:00:00.404) 0:03:52.347 *********** 2025-05-13 23:43:36.396464 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396470 | orchestrator | 2025-05-13 23:43:36.396475 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-05-13 23:43:36.396480 | orchestrator | Tuesday 13 May 2025 23:36:01 +0000 (0:00:00.360) 0:03:52.708 *********** 2025-05-13 23:43:36.396485 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396491 | orchestrator | 2025-05-13 23:43:36.396500 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-05-13 23:43:36.396507 | orchestrator | Tuesday 13 May 2025 23:36:02 +0000 (0:00:00.844) 0:03:53.552 *********** 2025-05-13 23:43:36.396515 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396524 | orchestrator | 2025-05-13 23:43:36.396535 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-05-13 23:43:36.396551 | orchestrator | Tuesday 13 May 2025 23:36:02 +0000 (0:00:00.123) 0:03:53.676 *********** 2025-05-13 23:43:36.396560 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396568 | orchestrator | 2025-05-13 23:43:36.396576 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-05-13 23:43:36.396585 | orchestrator | Tuesday 13 May 2025 23:36:02 +0000 (0:00:00.228) 0:03:53.905 *********** 2025-05-13 23:43:36.396593 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396602 | orchestrator | 2025-05-13 23:43:36.396610 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-05-13 23:43:36.396618 | orchestrator | Tuesday 13 May 2025 23:36:03 +0000 (0:00:00.234) 0:03:54.139 *********** 2025-05-13 23:43:36.396627 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 23:43:36.396635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 23:43:36.396662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 23:43:36.396671 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396680 | orchestrator | 2025-05-13 23:43:36.396688 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-05-13 23:43:36.396696 | orchestrator | Tuesday 13 May 2025 23:36:03 +0000 (0:00:00.456) 0:03:54.595 *********** 2025-05-13 23:43:36.396704 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396713 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.396718 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.396724 | orchestrator | 2025-05-13 23:43:36.396740 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-05-13 23:43:36.396746 | orchestrator | Tuesday 13 May 2025 23:36:03 +0000 (0:00:00.379) 0:03:54.975 *********** 2025-05-13 
23:43:36.396751 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396757 | orchestrator | 2025-05-13 23:43:36.396762 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-05-13 23:43:36.396767 | orchestrator | Tuesday 13 May 2025 23:36:04 +0000 (0:00:00.266) 0:03:55.241 *********** 2025-05-13 23:43:36.396773 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396778 | orchestrator | 2025-05-13 23:43:36.396784 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-13 23:43:36.396789 | orchestrator | Tuesday 13 May 2025 23:36:04 +0000 (0:00:00.248) 0:03:55.490 *********** 2025-05-13 23:43:36.396794 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.396800 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.396805 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.396811 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.396816 | orchestrator | 2025-05-13 23:43:36.396822 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-13 23:43:36.396827 | orchestrator | Tuesday 13 May 2025 23:36:05 +0000 (0:00:01.329) 0:03:56.820 *********** 2025-05-13 23:43:36.396832 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.396838 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.396843 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.396849 | orchestrator | 2025-05-13 23:43:36.396861 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-13 23:43:36.396866 | orchestrator | Tuesday 13 May 2025 23:36:06 +0000 (0:00:00.450) 0:03:57.270 *********** 2025-05-13 23:43:36.396871 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.396877 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.396882 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.396887 | orchestrator | 2025-05-13 23:43:36.396893 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-13 23:43:36.396898 | orchestrator | Tuesday 13 May 2025 23:36:07 +0000 (0:00:01.511) 0:03:58.782 *********** 2025-05-13 23:43:36.396903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 23:43:36.396908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 23:43:36.396914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 23:43:36.396919 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.396924 | orchestrator | 2025-05-13 23:43:36.396930 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-13 23:43:36.396935 | orchestrator | Tuesday 13 May 2025 23:36:08 +0000 (0:00:01.182) 0:03:59.964 *********** 2025-05-13 23:43:36.396940 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.396946 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.396951 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.396956 | orchestrator | 2025-05-13 23:43:36.396961 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-05-13 23:43:36.396967 | orchestrator | Tuesday 13 May 2025 23:36:09 +0000 (0:00:00.448) 0:04:00.413 *********** 2025-05-13 23:43:36.396972 | orchestrator | skipping: [testbed-node-0] 2025-05-13 
23:43:36.396977 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.396983 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.396988 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.396993 | orchestrator | 2025-05-13 23:43:36.396999 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-13 23:43:36.397004 | orchestrator | Tuesday 13 May 2025 23:36:10 +0000 (0:00:01.301) 0:04:01.715 *********** 2025-05-13 23:43:36.397009 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.397015 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.397020 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.397025 | orchestrator | 2025-05-13 23:43:36.397031 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-05-13 23:43:36.397036 | orchestrator | Tuesday 13 May 2025 23:36:11 +0000 (0:00:00.454) 0:04:02.170 *********** 2025-05-13 23:43:36.397041 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.397047 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.397056 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.397062 | orchestrator | 2025-05-13 23:43:36.397067 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-13 23:43:36.397072 | orchestrator | Tuesday 13 May 2025 23:36:12 +0000 (0:00:01.334) 0:04:03.504 *********** 2025-05-13 23:43:36.397078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 23:43:36.397083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 23:43:36.397088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 23:43:36.397094 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.397099 | orchestrator | 2025-05-13 23:43:36.397104 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-05-13 23:43:36.397110 | orchestrator | Tuesday 13 May 2025 23:36:13 +0000 (0:00:00.712) 0:04:04.216 *********** 2025-05-13 23:43:36.397115 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.397121 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.397126 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.397131 | orchestrator | 2025-05-13 23:43:36.397136 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-05-13 23:43:36.397146 | orchestrator | Tuesday 13 May 2025 23:36:13 +0000 (0:00:00.292) 0:04:04.508 *********** 2025-05-13 23:43:36.397152 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397157 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397162 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397168 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.397173 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.397178 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.397183 | orchestrator | 2025-05-13 23:43:36.397189 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-05-13 23:43:36.397194 | orchestrator | Tuesday 13 May 2025 23:36:14 +0000 (0:00:00.851) 0:04:05.360 *********** 2025-05-13 23:43:36.397204 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.397210 | orchestrator | skipping: [testbed-node-4] 2025-05-13 
23:43:36.397215 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.397220 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:43:36.397228 | orchestrator | 2025-05-13 23:43:36.397236 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-05-13 23:43:36.397245 | orchestrator | Tuesday 13 May 2025 23:36:15 +0000 (0:00:00.938) 0:04:06.299 *********** 2025-05-13 23:43:36.397253 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.397263 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.397269 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.397274 | orchestrator | 2025-05-13 23:43:36.397279 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-05-13 23:43:36.397285 | orchestrator | Tuesday 13 May 2025 23:36:15 +0000 (0:00:00.308) 0:04:06.607 *********** 2025-05-13 23:43:36.397290 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.397296 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.397301 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.397306 | orchestrator | 2025-05-13 23:43:36.397312 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-05-13 23:43:36.397317 | orchestrator | Tuesday 13 May 2025 23:36:16 +0000 (0:00:01.234) 0:04:07.842 *********** 2025-05-13 23:43:36.397322 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 23:43:36.397328 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 23:43:36.397333 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 23:43:36.397338 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397344 | orchestrator | 2025-05-13 23:43:36.397349 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-05-13 23:43:36.397354 | orchestrator | Tuesday 13 May 2025 23:36:17 +0000 (0:00:00.778) 0:04:08.621 *********** 2025-05-13 23:43:36.397360 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.397365 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.397371 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.397376 | orchestrator | 2025-05-13 23:43:36.397381 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-13 23:43:36.397386 | orchestrator | 2025-05-13 23:43:36.397392 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 23:43:36.397397 | orchestrator | Tuesday 13 May 2025 23:36:18 +0000 (0:00:00.657) 0:04:09.278 *********** 2025-05-13 23:43:36.397402 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:43:36.397408 | orchestrator | 2025-05-13 23:43:36.397413 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 23:43:36.397418 | orchestrator | Tuesday 13 May 2025 23:36:18 +0000 (0:00:00.431) 0:04:09.710 *********** 2025-05-13 23:43:36.397424 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:43:36.397429 | orchestrator | 2025-05-13 23:43:36.397435 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2025-05-13 23:43:36.397445 | orchestrator | Tuesday 13 May 2025 23:36:19 +0000 (0:00:00.570) 0:04:10.280 *********** 2025-05-13 23:43:36.397450 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.397456 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.397461 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.397466 | orchestrator | 2025-05-13 23:43:36.397472 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 23:43:36.397477 | orchestrator | Tuesday 13 May 2025 23:36:20 +0000 (0:00:00.775) 0:04:11.056 *********** 2025-05-13 23:43:36.397482 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397488 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397493 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397498 | orchestrator | 2025-05-13 23:43:36.397503 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-13 23:43:36.397509 | orchestrator | Tuesday 13 May 2025 23:36:20 +0000 (0:00:00.301) 0:04:11.357 *********** 2025-05-13 23:43:36.397514 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397522 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397528 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397533 | orchestrator | 2025-05-13 23:43:36.397539 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 23:43:36.397544 | orchestrator | Tuesday 13 May 2025 23:36:20 +0000 (0:00:00.291) 0:04:11.649 *********** 2025-05-13 23:43:36.397549 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397555 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397560 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397565 | orchestrator | 2025-05-13 23:43:36.397570 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-13 23:43:36.397576 | orchestrator | Tuesday 13 May 2025 23:36:21 +0000 (0:00:00.429) 0:04:12.078 *********** 2025-05-13 23:43:36.397581 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.397586 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.397592 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.397597 | orchestrator | 2025-05-13 23:43:36.397602 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-13 23:43:36.397608 | orchestrator | Tuesday 13 May 2025 23:36:21 +0000 (0:00:00.626) 0:04:12.705 *********** 2025-05-13 23:43:36.397613 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397618 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397624 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397629 | orchestrator | 2025-05-13 23:43:36.397634 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 23:43:36.397659 | orchestrator | Tuesday 13 May 2025 23:36:21 +0000 (0:00:00.267) 0:04:12.972 *********** 2025-05-13 23:43:36.397668 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397673 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397679 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397684 | orchestrator | 2025-05-13 23:43:36.397689 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 23:43:36.397702 | orchestrator | Tuesday 13 May 2025 
23:36:22 +0000 (0:00:00.240) 0:04:13.213 *********** 2025-05-13 23:43:36.397707 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.397713 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.397718 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.397723 | orchestrator | 2025-05-13 23:43:36.397729 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 23:43:36.397734 | orchestrator | Tuesday 13 May 2025 23:36:23 +0000 (0:00:01.001) 0:04:14.215 *********** 2025-05-13 23:43:36.397739 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.397745 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.397750 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.397755 | orchestrator | 2025-05-13 23:43:36.397760 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 23:43:36.397766 | orchestrator | Tuesday 13 May 2025 23:36:23 +0000 (0:00:00.707) 0:04:14.922 *********** 2025-05-13 23:43:36.397777 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397783 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397788 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397794 | orchestrator | 2025-05-13 23:43:36.397799 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 23:43:36.397804 | orchestrator | Tuesday 13 May 2025 23:36:24 +0000 (0:00:00.273) 0:04:15.195 *********** 2025-05-13 23:43:36.397810 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.397815 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.397820 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.397825 | orchestrator | 2025-05-13 23:43:36.397831 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 23:43:36.397836 | orchestrator | Tuesday 13 May 2025 23:36:24 +0000 (0:00:00.276) 0:04:15.472 *********** 2025-05-13 23:43:36.397841 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397847 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397852 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397858 | orchestrator | 2025-05-13 23:43:36.397863 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 23:43:36.397868 | orchestrator | Tuesday 13 May 2025 23:36:24 +0000 (0:00:00.508) 0:04:15.981 *********** 2025-05-13 23:43:36.397873 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397879 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397884 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397889 | orchestrator | 2025-05-13 23:43:36.397895 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 23:43:36.397900 | orchestrator | Tuesday 13 May 2025 23:36:25 +0000 (0:00:00.440) 0:04:16.421 *********** 2025-05-13 23:43:36.397906 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397911 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397916 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397922 | orchestrator | 2025-05-13 23:43:36.397927 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 23:43:36.397932 | orchestrator | Tuesday 13 May 2025 23:36:25 +0000 (0:00:00.343) 0:04:16.764 *********** 2025-05-13 23:43:36.397937 | orchestrator | skipping: 
[testbed-node-0] 2025-05-13 23:43:36.397943 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397948 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397953 | orchestrator | 2025-05-13 23:43:36.397959 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 23:43:36.397964 | orchestrator | Tuesday 13 May 2025 23:36:26 +0000 (0:00:00.258) 0:04:17.023 *********** 2025-05-13 23:43:36.397969 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.397975 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.397980 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.397985 | orchestrator | 2025-05-13 23:43:36.397991 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 23:43:36.397996 | orchestrator | Tuesday 13 May 2025 23:36:26 +0000 (0:00:00.458) 0:04:17.481 *********** 2025-05-13 23:43:36.398001 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.398007 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.398012 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.398108 | orchestrator | 2025-05-13 23:43:36.398120 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 23:43:36.398129 | orchestrator | Tuesday 13 May 2025 23:36:26 +0000 (0:00:00.378) 0:04:17.859 *********** 2025-05-13 23:43:36.398136 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.398142 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.398147 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.398152 | orchestrator | 2025-05-13 23:43:36.398163 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 23:43:36.398168 | orchestrator | Tuesday 13 May 2025 23:36:27 +0000 (0:00:00.409) 0:04:18.269 *********** 2025-05-13 23:43:36.398173 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.398185 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.398190 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.398195 | orchestrator | 2025-05-13 23:43:36.398201 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-05-13 23:43:36.398207 | orchestrator | Tuesday 13 May 2025 23:36:27 +0000 (0:00:00.625) 0:04:18.894 *********** 2025-05-13 23:43:36.398212 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.398217 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.398222 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.398227 | orchestrator | 2025-05-13 23:43:36.398233 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-05-13 23:43:36.398238 | orchestrator | Tuesday 13 May 2025 23:36:28 +0000 (0:00:00.448) 0:04:19.343 *********** 2025-05-13 23:43:36.398243 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:43:36.398249 | orchestrator | 2025-05-13 23:43:36.398254 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-05-13 23:43:36.398259 | orchestrator | Tuesday 13 May 2025 23:36:28 +0000 (0:00:00.584) 0:04:19.928 *********** 2025-05-13 23:43:36.398264 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.398270 | orchestrator | 2025-05-13 23:43:36.398275 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 
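The "Generate monitor initial keyring" task named just above runs once, delegated to localhost, and prints a fresh base64 secret; the subsequent "Create monitor initial keyring" task then seeds the shared mon. identity on each monitor with that secret. A minimal sketch of the idea, assuming standard ceph-authtool flags, a hypothetical /etc/ceph/ceph.mon.keyring path and a "mons" inventory group; illustrative only, not the verbatim ceph-ansible implementation:

    - hosts: mons
      become: true
      tasks:
        - name: Generate monitor initial keyring   # print a random base64 key once
          ansible.builtin.command: ceph-authtool --gen-print-key
          register: initial_mon_key
          become: false
          delegate_to: localhost
          run_once: true

        - name: Create monitor initial keyring     # seed the shared mon. identity
          ansible.builtin.command: >
            ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring
            --name mon. --add-key {{ initial_mon_key.stdout }}
            --cap mon 'allow *'
          args:
            creates: /etc/ceph/ceph.mon.keyring
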
2025-05-13 23:43:36.398280 | orchestrator | Tuesday 13 May 2025 23:36:29 +0000 (0:00:00.337) 0:04:20.265 *********** 2025-05-13 23:43:36.398286 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-13 23:43:36.398291 | orchestrator | 2025-05-13 23:43:36.398304 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-05-13 23:43:36.398309 | orchestrator | Tuesday 13 May 2025 23:36:30 +0000 (0:00:01.064) 0:04:21.330 *********** 2025-05-13 23:43:36.398315 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.398320 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.398325 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.398330 | orchestrator | 2025-05-13 23:43:36.398336 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-05-13 23:43:36.398341 | orchestrator | Tuesday 13 May 2025 23:36:30 +0000 (0:00:00.319) 0:04:21.650 *********** 2025-05-13 23:43:36.398347 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.398352 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.398357 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.398363 | orchestrator | 2025-05-13 23:43:36.398368 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-05-13 23:43:36.398373 | orchestrator | Tuesday 13 May 2025 23:36:31 +0000 (0:00:00.476) 0:04:22.127 *********** 2025-05-13 23:43:36.398379 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.398384 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.398389 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.398394 | orchestrator | 2025-05-13 23:43:36.398403 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-05-13 23:43:36.398412 | orchestrator | Tuesday 13 May 2025 23:36:32 +0000 (0:00:01.500) 0:04:23.628 *********** 2025-05-13 23:43:36.398421 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.398430 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.398438 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.398447 | orchestrator | 2025-05-13 23:43:36.398454 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-05-13 23:43:36.398462 | orchestrator | Tuesday 13 May 2025 23:36:33 +0000 (0:00:00.905) 0:04:24.533 *********** 2025-05-13 23:43:36.398471 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.398480 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.398488 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.398496 | orchestrator | 2025-05-13 23:43:36.398504 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-05-13 23:43:36.398513 | orchestrator | Tuesday 13 May 2025 23:36:34 +0000 (0:00:00.569) 0:04:25.102 *********** 2025-05-13 23:43:36.398522 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.398560 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.398570 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.398578 | orchestrator | 2025-05-13 23:43:36.398588 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-05-13 23:43:36.398597 | orchestrator | Tuesday 13 May 2025 23:36:34 +0000 (0:00:00.633) 0:04:25.736 *********** 2025-05-13 23:43:36.398606 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.398615 | orchestrator | 2025-05-13 
23:43:36.398620 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-05-13 23:43:36.398625 | orchestrator | Tuesday 13 May 2025 23:36:35 +0000 (0:00:01.190) 0:04:26.927 *********** 2025-05-13 23:43:36.398631 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.398652 | orchestrator | 2025-05-13 23:43:36.398659 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-05-13 23:43:36.398664 | orchestrator | Tuesday 13 May 2025 23:36:36 +0000 (0:00:00.688) 0:04:27.615 *********** 2025-05-13 23:43:36.398670 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-13 23:43:36.398675 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:43:36.398680 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:43:36.398685 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 23:43:36.398691 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-05-13 23:43:36.398696 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 23:43:36.398702 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 23:43:36.398707 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-05-13 23:43:36.398712 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 23:43:36.398718 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-05-13 23:43:36.398728 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-05-13 23:43:36.398733 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-05-13 23:43:36.398739 | orchestrator | 2025-05-13 23:43:36.398744 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-05-13 23:43:36.398749 | orchestrator | Tuesday 13 May 2025 23:36:39 +0000 (0:00:03.359) 0:04:30.974 *********** 2025-05-13 23:43:36.398754 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.398760 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.398765 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.398770 | orchestrator | 2025-05-13 23:43:36.398776 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-05-13 23:43:36.398781 | orchestrator | Tuesday 13 May 2025 23:36:41 +0000 (0:00:01.557) 0:04:32.532 *********** 2025-05-13 23:43:36.398786 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.398792 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.398797 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.398802 | orchestrator | 2025-05-13 23:43:36.398807 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-05-13 23:43:36.398813 | orchestrator | Tuesday 13 May 2025 23:36:41 +0000 (0:00:00.362) 0:04:32.895 *********** 2025-05-13 23:43:36.398818 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.398823 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.398828 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.398834 | orchestrator | 2025-05-13 23:43:36.398839 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-05-13 23:43:36.398844 | orchestrator | Tuesday 13 May 2025 23:36:42 +0000 (0:00:00.334) 0:04:33.229 *********** 2025-05-13 23:43:36.398851 | orchestrator | changed: 
[testbed-node-0] 2025-05-13 23:43:36.398860 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.398868 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.398877 | orchestrator | 2025-05-13 23:43:36.398885 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-05-13 23:43:36.398902 | orchestrator | Tuesday 13 May 2025 23:36:44 +0000 (0:00:01.963) 0:04:35.193 *********** 2025-05-13 23:43:36.398914 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.398920 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.398925 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.398931 | orchestrator | 2025-05-13 23:43:36.398936 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-05-13 23:43:36.398941 | orchestrator | Tuesday 13 May 2025 23:36:45 +0000 (0:00:01.695) 0:04:36.888 *********** 2025-05-13 23:43:36.398947 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.398952 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.398957 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.398963 | orchestrator | 2025-05-13 23:43:36.398968 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-05-13 23:43:36.398973 | orchestrator | Tuesday 13 May 2025 23:36:46 +0000 (0:00:00.357) 0:04:37.246 *********** 2025-05-13 23:43:36.398979 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:43:36.398984 | orchestrator | 2025-05-13 23:43:36.398990 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-05-13 23:43:36.398995 | orchestrator | Tuesday 13 May 2025 23:36:46 +0000 (0:00:00.528) 0:04:37.774 *********** 2025-05-13 23:43:36.399001 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.399006 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.399011 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.399017 | orchestrator | 2025-05-13 23:43:36.399022 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-05-13 23:43:36.399028 | orchestrator | Tuesday 13 May 2025 23:36:47 +0000 (0:00:00.535) 0:04:38.310 *********** 2025-05-13 23:43:36.399033 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.399038 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.399044 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.399049 | orchestrator | 2025-05-13 23:43:36.399054 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-05-13 23:43:36.399060 | orchestrator | Tuesday 13 May 2025 23:36:47 +0000 (0:00:00.317) 0:04:38.627 *********** 2025-05-13 23:43:36.399065 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:43:36.399071 | orchestrator | 2025-05-13 23:43:36.399076 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-05-13 23:43:36.399081 | orchestrator | Tuesday 13 May 2025 23:36:48 +0000 (0:00:00.558) 0:04:39.185 *********** 2025-05-13 23:43:36.399087 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.399092 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.399097 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.399103 | 
orchestrator | 2025-05-13 23:43:36.399108 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-05-13 23:43:36.399113 | orchestrator | Tuesday 13 May 2025 23:36:50 +0000 (0:00:02.009) 0:04:41.195 *********** 2025-05-13 23:43:36.399119 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.399124 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.399130 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.399135 | orchestrator | 2025-05-13 23:43:36.399140 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-05-13 23:43:36.399146 | orchestrator | Tuesday 13 May 2025 23:36:51 +0000 (0:00:01.316) 0:04:42.511 *********** 2025-05-13 23:43:36.399151 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.399156 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.399162 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.399167 | orchestrator | 2025-05-13 23:43:36.399172 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-05-13 23:43:36.399178 | orchestrator | Tuesday 13 May 2025 23:36:53 +0000 (0:00:01.860) 0:04:44.372 *********** 2025-05-13 23:43:36.399183 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.399206 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.399212 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.399217 | orchestrator | 2025-05-13 23:43:36.399223 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-05-13 23:43:36.399228 | orchestrator | Tuesday 13 May 2025 23:36:55 +0000 (0:00:02.302) 0:04:46.675 *********** 2025-05-13 23:43:36.399237 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:43:36.399243 | orchestrator | 2025-05-13 23:43:36.399248 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-05-13 23:43:36.399253 | orchestrator | Tuesday 13 May 2025 23:36:56 +0000 (0:00:00.845) 0:04:47.520 *********** 2025-05-13 23:43:36.399259 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
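The FAILED - RETRYING line above is an Ansible until loop at work: the play polls the first monitor until every expected mon appears in the quorum, and here it succeeds on the second attempt (the task total of 21.8 s below reflects the retry delay). A conceptually equivalent sketch, assuming a container named ceph-mon-<hostname>, docker as the runtime and a "mons" inventory group — these names are assumptions, and this is not the verbatim ceph-ansible task:

    - hosts: mons
      become: true
      tasks:
        - name: Waiting for the monitor(s) to form the quorum...
          ansible.builtin.command: >
            docker exec ceph-mon-{{ ansible_facts['hostname'] }}
            ceph quorum_status --format json
          register: quorum_raw
          changed_when: false
          run_once: true
          retries: 10        # matches the "(10 retries left)" message above
          delay: 20
          until: >
            (quorum_raw.stdout | from_json).quorum_names | length
            == groups['mons'] | length
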
2025-05-13 23:43:36.399264 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.399270 | orchestrator | 2025-05-13 23:43:36.399275 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-05-13 23:43:36.399281 | orchestrator | Tuesday 13 May 2025 23:37:18 +0000 (0:00:21.824) 0:05:09.345 *********** 2025-05-13 23:43:36.399286 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.399292 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.399297 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.399302 | orchestrator | 2025-05-13 23:43:36.399308 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-05-13 23:43:36.399313 | orchestrator | Tuesday 13 May 2025 23:37:28 +0000 (0:00:10.168) 0:05:19.513 *********** 2025-05-13 23:43:36.399319 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.399324 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.399329 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.399335 | orchestrator | 2025-05-13 23:43:36.399340 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-05-13 23:43:36.399345 | orchestrator | Tuesday 13 May 2025 23:37:29 +0000 (0:00:00.526) 0:05:20.039 *********** 2025-05-13 23:43:36.399356 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__df6b42027cb5f79e66f136b09854134f8fd00308'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-05-13 23:43:36.399364 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__df6b42027cb5f79e66f136b09854134f8fd00308'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-05-13 23:43:36.399370 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__df6b42027cb5f79e66f136b09854134f8fd00308'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-05-13 23:43:36.399377 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__df6b42027cb5f79e66f136b09854134f8fd00308'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-05-13 23:43:36.399383 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__df6b42027cb5f79e66f136b09854134f8fd00308'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-05-13 23:43:36.399396 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 
'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__df6b42027cb5f79e66f136b09854134f8fd00308'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__df6b42027cb5f79e66f136b09854134f8fd00308'}])  2025-05-13 23:43:36.399403 | orchestrator | 2025-05-13 23:43:36.399408 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-13 23:43:36.399413 | orchestrator | Tuesday 13 May 2025 23:37:43 +0000 (0:00:14.617) 0:05:34.657 *********** 2025-05-13 23:43:36.399419 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.399424 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.399429 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.399435 | orchestrator | 2025-05-13 23:43:36.399440 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-05-13 23:43:36.399445 | orchestrator | Tuesday 13 May 2025 23:37:43 +0000 (0:00:00.331) 0:05:34.989 *********** 2025-05-13 23:43:36.399451 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:43:36.399456 | orchestrator | 2025-05-13 23:43:36.399461 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-05-13 23:43:36.399470 | orchestrator | Tuesday 13 May 2025 23:37:44 +0000 (0:00:00.845) 0:05:35.835 *********** 2025-05-13 23:43:36.399475 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.399481 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.399486 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.399492 | orchestrator | 2025-05-13 23:43:36.399497 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-05-13 23:43:36.399502 | orchestrator | Tuesday 13 May 2025 23:37:45 +0000 (0:00:00.386) 0:05:36.222 *********** 2025-05-13 23:43:36.399508 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.399513 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.399519 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.399524 | orchestrator | 2025-05-13 23:43:36.399529 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-05-13 23:43:36.399535 | orchestrator | Tuesday 13 May 2025 23:37:45 +0000 (0:00:00.409) 0:05:36.631 *********** 2025-05-13 23:43:36.399540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 23:43:36.399546 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 23:43:36.399551 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 23:43:36.399556 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.399562 | orchestrator | 2025-05-13 23:43:36.399567 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-05-13 23:43:36.399573 | orchestrator | Tuesday 13 May 2025 23:37:46 +0000 (0:00:00.872) 0:05:37.504 *********** 2025-05-13 23:43:36.399578 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.399583 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.399589 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.399594 | orchestrator | 2025-05-13 23:43:36.399599 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 
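For reference, the "Set cluster configs" task that closed the ceph-mon play above pushed each changed item into the cluster's configuration database; the one skipped item carried Ansible's omit placeholder for osd_crush_chooseleaf_type. A sketch of the equivalent ceph config set calls, using the key/value pairs from the log and the same illustrative container naming as above; assumptions, not the exact ceph-ansible task:

    - hosts: mons[0]
      become: true
      tasks:
        - name: Set cluster configs   # same key/value pairs the changed items show
          ansible.builtin.command: >
            docker exec ceph-mon-{{ ansible_facts['hostname'] }}
            ceph config set global {{ item.key }} {{ item.value }}
          loop:
            - { key: public_network, value: "192.168.16.0/20" }
            - { key: cluster_network, value: "192.168.16.0/20" }
            - { key: osd_pool_default_crush_rule, value: "-1" }
            - { key: ms_bind_ipv6, value: "false" }
            - { key: ms_bind_ipv4, value: "true" }
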
PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Tuesday 13 May 2025 23:37:47 +0000 (0:00:00.845) 0:05:38.349 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Tuesday 13 May 2025 23:37:47 +0000 (0:00:00.502) 0:05:38.852 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Tuesday 13 May 2025 23:37:48 +0000 (0:00:00.764) 0:05:39.617 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Tuesday 13 May 2025 23:37:49 +0000 (0:00:00.752) 0:05:40.370 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Tuesday 13 May 2025 23:37:49 +0000 (0:00:00.320) 0:05:40.691 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Tuesday 13 May 2025 23:37:50 +0000 (0:00:00.562) 0:05:41.253 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 13 May 2025 23:37:50 +0000 (0:00:00.357) 0:05:41.611 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 13 May 2025 23:37:51 +0000 (0:00:00.690) 0:05:42.301 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 13 May 2025 23:37:51 +0000 (0:00:00.322) 0:05:42.624 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Tuesday 13 May 2025 23:37:52 +0000 (0:00:00.558) 0:05:43.182 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Tuesday 13 May 2025 23:37:52 +0000 (0:00:00.735) 0:05:43.918 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Tuesday 13 May 2025 23:37:53 +0000 (0:00:00.719) 0:05:44.637 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Tuesday 13 May 2025 23:37:53 +0000 (0:00:00.332) 0:05:44.970 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Tuesday 13 May 2025 23:37:54 +0000 (0:00:00.631) 0:05:45.601 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Tuesday 13 May 2025 23:37:54 +0000 (0:00:00.323) 0:05:45.925 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Tuesday 13 May 2025 23:37:55 +0000 (0:00:00.312) 0:05:46.238 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Tuesday 13 May 2025 23:37:55 +0000 (0:00:00.305) 0:05:46.543 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Tuesday 13 May 2025 23:37:56 +0000 (0:00:00.555) 0:05:47.099 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Tuesday 13 May 2025 23:37:56 +0000 (0:00:00.318) 0:05:47.417 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Tuesday 13 May 2025 23:37:56 +0000 (0:00:00.367) 0:05:47.785 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Tuesday 13 May 2025 23:37:57 +0000 (0:00:00.363) 0:05:48.148 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

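The container checks above are how ceph-handler works out which Ceph daemons this host actually runs; each probe is reduced to a boolean handler_*_status fact that later gates the restart handlers. Conceptually each check is a container lookup along these lines (a sketch; the exact filter string and container engine depend on the deployment):

  # non-empty output means a mon container is running on this host
  docker ps -q --filter "name=ceph-mon-$(hostname -s)"
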
TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Tuesday 13 May 2025 23:37:57 +0000 (0:00:00.829) 0:05:48.978 ***********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Tuesday 13 May 2025 23:37:58 +0000 (0:00:00.627) 0:05:49.606 ***********
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Tuesday 13 May 2025 23:37:59 +0000 (0:00:00.502) 0:05:50.108 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Tuesday 13 May 2025 23:38:00 +0000 (0:00:00.962) 0:05:51.071 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Tuesday 13 May 2025 23:38:00 +0000 (0:00:00.331) 0:05:51.403 ***********
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Tuesday 13 May 2025 23:38:11 +0000 (0:00:10.913) 0:06:02.317 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Tuesday 13 May 2025 23:38:11 +0000 (0:00:00.345) 0:06:02.662 ***********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Tuesday 13 May 2025 23:38:14 +0000 (0:00:02.536) 0:06:05.199 ***********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Tuesday 13 May 2025 23:38:15 +0000 (0:00:01.220) 0:06:06.420 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

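Creating the mgr keyrings is delegated to the first monitor and accounts for the ~11 s step above. Per mgr it boils down to a ceph auth call of this shape (a sketch using the capabilities Ceph's documentation recommends for mgr daemons; the node name is one of the three above):

  ceph auth get-or-create mgr.testbed-node-0 \
      mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
      -o /etc/ceph/ceph.mgr.testbed-node-0.keyring
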
TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Tuesday 13 May 2025 23:38:16 +0000 (0:00:00.746) 0:06:07.166 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Tuesday 13 May 2025 23:38:16 +0000 (0:00:00.304) 0:06:07.470 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Tuesday 13 May 2025 23:38:17 +0000 (0:00:00.576) 0:06:08.047 ***********
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Tuesday 13 May 2025 23:38:17 +0000 (0:00:00.552) 0:06:08.600 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Tuesday 13 May 2025 23:38:17 +0000 (0:00:00.308) 0:06:08.909 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Tuesday 13 May 2025 23:38:18 +0000 (0:00:00.628) 0:06:09.537 ***********
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Tuesday 13 May 2025 23:38:19 +0000 (0:00:00.566) 0:06:10.104 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Tuesday 13 May 2025 23:38:20 +0000 (0:00:01.276) 0:06:11.381 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Tuesday 13 May 2025 23:38:21 +0000 (0:00:01.359) 0:06:12.740 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Tuesday 13 May 2025 23:38:23 +0000 (0:00:01.863) 0:06:14.604 ***********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

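Starting the mgr is plain systemd plumbing: the role renders a per-host unit plus a ceph-mgr.target and activates them, roughly equivalent to:

  systemctl daemon-reload
  systemctl enable ceph-mgr.target
  systemctl enable --now "ceph-mgr@$(hostname -s)"
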
TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Tuesday 13 May 2025 23:38:25 +0000 (0:00:02.124) 0:06:16.728 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Tuesday 13 May 2025 23:38:26 +0000 (0:00:00.430) 0:06:17.159 ***********
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Tuesday 13 May 2025 23:38:50 +0000 (0:00:24.389) 0:06:41.549 ***********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Tuesday 13 May 2025 23:38:52 +0000 (0:00:01.813) 0:06:43.363 ***********
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Tuesday 13 May 2025 23:38:52 +0000 (0:00:00.333) 0:06:43.696 ***********
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Tuesday 13 May 2025 23:38:52 +0000 (0:00:00.142) 0:06:43.839 ***********
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Tuesday 13 May 2025 23:38:59 +0000 (0:00:06.406) 0:06:50.246 ***********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

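The module tasks reconcile the running mgr against the desired module list: wait until every mgr is up (four retries, ~24 s here), read the enabled set, disable what is surplus (iostat, nfs, restful) and enable what is missing (dashboard, prometheus); balancer and status are skipped, presumably because they are treated as always-on. The same round trip by hand (sketch):

  ceph mgr module ls            # JSON listing enabled/disabled modules
  ceph mgr module disable restful
  ceph mgr module enable dashboard
  ceph mgr module enable prometheus
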
RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Tuesday 13 May 2025 23:39:04 +0000 (0:00:04.931) 0:06:55.177 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Tuesday 13 May 2025 23:39:05 +0000 (0:00:00.956) 0:06:56.134 ***********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Tuesday 13 May 2025 23:39:05 +0000 (0:00:00.532) 0:06:56.667 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Tuesday 13 May 2025 23:39:05 +0000 (0:00:00.298) 0:06:56.966 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Tuesday 13 May 2025 23:39:07 +0000 (0:00:01.459) 0:06:58.425 ***********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Tuesday 13 May 2025 23:39:08 +0000 (0:00:00.632) 0:06:59.058 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Tuesday 13 May 2025 23:39:08 +0000 (0:00:00.586) 0:06:59.645 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Tuesday 13 May 2025 23:39:09 +0000 (0:00:00.829) 0:07:00.474 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Tuesday 13 May 2025 23:39:10 +0000 (0:00:00.547) 0:07:01.021 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Tuesday 13 May 2025 23:39:10 +0000 (0:00:00.380) 0:07:01.402 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Tuesday 13 May 2025 23:39:11 +0000 (0:00:01.012) 0:07:02.415 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Tuesday 13 May 2025 23:39:12 +0000 (0:00:00.654) 0:07:03.069 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 13 May 2025 23:39:12 +0000 (0:00:00.650) 0:07:03.720 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 13 May 2025 23:39:13 +0000 (0:00:00.298) 0:07:04.019 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 13 May 2025 23:39:13 +0000 (0:00:00.602) 0:07:04.622 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Tuesday 13 May 2025 23:39:13 +0000 (0:00:00.301) 0:07:04.923 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Tuesday 13 May 2025 23:39:14 +0000 (0:00:00.654) 0:07:05.578 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Tuesday 13 May 2025 23:39:15 +0000 (0:00:00.654) 0:07:06.233 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Tuesday 13 May 2025 23:39:15 +0000 (0:00:00.588) 0:07:06.821 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Tuesday 13 May 2025 23:39:16 +0000 (0:00:00.298) 0:07:07.120 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Tuesday 13 May 2025 23:39:16 +0000 (0:00:00.310) 0:07:07.431 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Tuesday 13 May 2025 23:39:16 +0000 (0:00:00.297) 0:07:07.728 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Tuesday 13 May 2025 23:39:17 +0000 (0:00:00.470) 0:07:08.198 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Tuesday 13 May 2025 23:39:17 +0000 (0:00:00.260) 0:07:08.458 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Tuesday 13 May 2025 23:39:17 +0000 (0:00:00.296) 0:07:08.755 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Tuesday 13 May 2025 23:39:18 +0000 (0:00:00.257) 0:07:09.013 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Tuesday 13 May 2025 23:39:18 +0000 (0:00:00.451) 0:07:09.464 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Tuesday 13 May 2025 23:39:18 +0000 (0:00:00.511) 0:07:09.976 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Tuesday 13 May 2025 23:39:19 +0000 (0:00:00.312) 0:07:10.289 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

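container_exec_cmd is the prefix that lets subsequent tasks run ceph CLI calls inside a mon container while delegated to a mon host; assuming the usual ceph-ansible ceph-mon-<hostname> container naming, it amounts to (sketch):

  docker exec ceph-mon-testbed-node-0 ceph --cluster ceph -s
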
TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Tuesday 13 May 2025 23:39:20 +0000 (0:00:00.951) 0:07:11.240 ***********
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Tuesday 13 May 2025 23:39:20 +0000 (0:00:00.577) 0:07:11.818 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Tuesday 13 May 2025 23:39:21 +0000 (0:00:00.330) 0:07:12.148 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Tuesday 13 May 2025 23:39:21 +0000 (0:00:00.529) 0:07:12.677 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Tuesday 13 May 2025 23:39:22 +0000 (0:00:00.631) 0:07:13.309 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Tuesday 13 May 2025 23:39:22 +0000 (0:00:00.331) 0:07:13.641 ***********
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

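The tuning loop writes exactly the sysctls shown in the items above; applied by hand it would be:

  sysctl -w fs.aio-max-nr=1048576
  sysctl -w fs.file-max=26234859
  sysctl -w vm.zone_reclaim_mode=0
  sysctl -w vm.swappiness=10
  sysctl -w vm.min_free_kbytes=67584

(the Ansible sysctl module also persists them, e.g. in /etc/sysctl.conf, so they survive reboots).
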
TASK [ceph-osd : Install dependencies] *****************************************
Tuesday 13 May 2025 23:39:24 +0000 (0:00:02.181) 0:07:15.822 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Tuesday 13 May 2025 23:39:25 +0000 (0:00:00.353) 0:07:16.176 ***********
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Tuesday 13 May 2025 23:39:25 +0000 (0:00:00.808) 0:07:16.985 ***********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Tuesday 13 May 2025 23:39:26 +0000 (0:00:00.893) 0:07:17.878 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Tuesday 13 May 2025 23:39:28 +0000 (0:00:01.938) 0:07:19.817 ***********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Tuesday 13 May 2025 23:39:30 +0000 (0:00:01.511) 0:07:21.328 ***********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Tuesday 13 May 2025 23:39:32 +0000 (0:00:02.026) 0:07:23.355 ***********
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Tuesday 13 May 2025 23:39:32 +0000 (0:00:00.618) 0:07:23.973 ***********
changed: [testbed-node-3] => (item={'data': 'osd-block-cf553414-fd5b-54a4-812a-8e7012220720', 'data_vg': 'ceph-cf553414-fd5b-54a4-812a-8e7012220720'})
changed: [testbed-node-5] => (item={'data': 'osd-block-53cfcf66-6862-5829-a71b-dc902cfbd9df', 'data_vg': 'ceph-53cfcf66-6862-5829-a71b-dc902cfbd9df'})
changed: [testbed-node-4] => (item={'data': 'osd-block-8f56c737-ae06-5042-be62-d4d7430a3913', 'data_vg': 'ceph-8f56c737-ae06-5042-be62-d4d7430a3913'})
changed: [testbed-node-3] => (item={'data': 'osd-block-9ea6307c-c51b-54ed-aeb4-48fe7d66605c', 'data_vg': 'ceph-9ea6307c-c51b-54ed-aeb4-48fe7d66605c'})
changed: [testbed-node-4] => (item={'data': 'osd-block-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3', 'data_vg': 'ceph-b9ab4848-02bd-5b2a-a6cc-ded55503b6b3'})
changed: [testbed-node-5] => (item={'data': 'osd-block-d153f4c4-5597-54b4-b460-41e490b92c19', 'data_vg': 'ceph-d153f4c4-5597-54b4-b460-41e490b92c19'})

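Each loop item hands one pre-created LVM logical volume to ceph-volume, with the cluster-wide noup flag set beforehand so new OSDs cannot be marked up until everything is started. Per item this is roughly (sketch, using the first vg/lv pair from the log):

  ceph osd set noup    # done once, delegated to the first mon
  ceph-volume lvm create --data ceph-cf553414-fd5b-54a4-812a-8e7012220720/osd-block-cf553414-fd5b-54a4-812a-8e7012220720

ceph-volume prepares and activates each OSD in one go; the ~42 s runtime covers two OSDs per node.
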
TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Tuesday 13 May 2025 23:40:14 +0000 (0:00:41.742) 0:08:05.716 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Tuesday 13 May 2025 23:40:15 +0000 (0:00:00.598) 0:08:06.314 ***********
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Tuesday 13 May 2025 23:40:15 +0000 (0:00:00.542) 0:08:06.857 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Tuesday 13 May 2025 23:40:16 +0000 (0:00:00.660) 0:08:07.518 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Tuesday 13 May 2025 23:40:19 +0000 (0:00:02.822) 0:08:10.340 ***********
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Tuesday 13 May 2025 23:40:19 +0000 (0:00:00.582) 0:08:10.922 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Tuesday 13 May 2025 23:40:21 +0000 (0:00:01.151) 0:08:12.074 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Tuesday 13 May 2025 23:40:22 +0000 (0:00:01.238) 0:08:13.312 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

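Unlike the mgr unit, the rendered ceph-osd unit is instantiated per OSD id, so enabling and starting follows the ids collected above (sketch; per the items below, ids 0 and 3 land on testbed-node-3):

  systemctl enable ceph-osd.target
  systemctl enable --now ceph-osd@0 ceph-osd@3
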
TASK [ceph-osd : Ensure systemd service override directory exists] *************
Tuesday 13 May 2025 23:40:24 +0000 (0:00:01.806) 0:08:15.118 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
Tuesday 13 May 2025 23:40:24 +0000 (0:00:00.305) 0:08:15.424 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
Tuesday 13 May 2025 23:40:24 +0000 (0:00:00.316) 0:08:15.741 ***********
ok: [testbed-node-3] => (item=3)
ok: [testbed-node-4] => (item=1)
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=4)
ok: [testbed-node-5] => (item=2)
ok: [testbed-node-5] => (item=5)

TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
Tuesday 13 May 2025 23:40:26 +0000 (0:00:01.379) 0:08:17.120 ***********
changed: [testbed-node-3] => (item=3)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=4)
changed: [testbed-node-5] => (item=5)

TASK [ceph-osd : Systemd start osd] ********************************************
Tuesday 13 May 2025 23:40:28 +0000 (0:00:02.117) 0:08:19.238 ***********
changed: [testbed-node-3] => (item=3)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=4)
changed: [testbed-node-5] => (item=5)

TASK [ceph-osd : Unset noup flag] **********************************************
Tuesday 13 May 2025 23:40:32 +0000 (0:00:03.811) 0:08:23.049 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Wait for all osd to be up] ************************************
Tuesday 13 May 2025 23:40:34 +0000 (0:00:02.345) 0:08:25.394 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

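With every OSD started, the last node in the batch lifts the noup flag and polls the cluster until all OSDs report up; the single retry above is that poll. By hand:

  ceph osd unset noup
  ceph osd stat        # reports "6 osds: 6 up, 6 in" once the cluster has converged
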
TASK [ceph-osd : Include crush_rules.yml] **************************************
Tuesday 13 May 2025 23:40:47 +0000 (0:00:12.844) 0:08:38.239 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Tuesday 13 May 2025 23:40:48 +0000 (0:00:00.847) 0:08:39.086 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Tuesday 13 May 2025 23:40:48 +0000 (0:00:00.565) 0:08:39.652 ***********
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Tuesday 13 May 2025 23:40:49 +0000 (0:00:00.567) 0:08:40.219 ***********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Tuesday 13 May 2025 23:40:49 +0000 (0:00:00.360) 0:08:40.580 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Tuesday 13 May 2025 23:40:49 +0000 (0:00:00.322) 0:08:40.902 ***********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Tuesday 13 May 2025 23:40:50 +0000 (0:00:00.802) 0:08:41.704 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Tuesday 13 May 2025 23:40:51 +0000 (0:00:00.316) 0:08:42.021 ***********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Tuesday 13 May 2025 23:40:51 +0000 (0:00:00.216) 0:08:42.238 ***********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Tuesday 13 May 2025 23:40:51 +0000 (0:00:00.215) 0:08:42.453 ***********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Tuesday 13 May 2025 23:40:51 +0000 (0:00:00.136) 0:08:42.590 ***********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Tuesday 13 May 2025 23:40:51 +0000 (0:00:00.230) 0:08:42.820 ***********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Tuesday 13 May 2025 23:40:52 +0000 (0:00:00.215) 0:08:43.035 ***********
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

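All of these handler steps are skipped here because no OSD restart was pending, but the sequence documents the safe-restart pattern: pause the balancer and per-pool PG autoscaling, restart OSDs node by node, then re-enable both. In CLI terms (sketch; <pool> stands for each pool from the pool list):

  ceph balancer off
  ceph osd pool set <pool> pg_autoscale_mode off
  # ...restart OSDs, waiting for each to rejoin...
  ceph osd pool set <pool> pg_autoscale_mode on
  ceph balancer on
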
orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.403736 | orchestrator | 2025-05-13 23:43:36.403740 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-05-13 23:43:36.403745 | orchestrator | Tuesday 13 May 2025 23:40:53 +0000 (0:00:00.579) 0:08:44.030 *********** 2025-05-13 23:43:36.403750 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.403755 | orchestrator | 2025-05-13 23:43:36.403759 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-05-13 23:43:36.403764 | orchestrator | Tuesday 13 May 2025 23:40:53 +0000 (0:00:00.226) 0:08:44.257 *********** 2025-05-13 23:43:36.403769 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.403774 | orchestrator | 2025-05-13 23:43:36.403778 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-13 23:43:36.403783 | orchestrator | 2025-05-13 23:43:36.403788 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 23:43:36.403793 | orchestrator | Tuesday 13 May 2025 23:40:53 +0000 (0:00:00.717) 0:08:44.974 *********** 2025-05-13 23:43:36.403798 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.403803 | orchestrator | 2025-05-13 23:43:36.403808 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 23:43:36.403813 | orchestrator | Tuesday 13 May 2025 23:40:55 +0000 (0:00:01.218) 0:08:46.193 *********** 2025-05-13 23:43:36.403818 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.403823 | orchestrator | 2025-05-13 23:43:36.403828 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-13 23:43:36.403832 | orchestrator | Tuesday 13 May 2025 23:40:56 +0000 (0:00:01.196) 0:08:47.389 *********** 2025-05-13 23:43:36.403837 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.403842 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.403847 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.403851 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.403856 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.403861 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.403866 | orchestrator | 2025-05-13 23:43:36.403870 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 23:43:36.403875 | orchestrator | Tuesday 13 May 2025 23:40:57 +0000 (0:00:00.800) 0:08:48.190 *********** 2025-05-13 23:43:36.403880 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.403885 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.403889 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.403894 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.403899 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.403904 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.403908 | orchestrator | 2025-05-13 23:43:36.403913 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-13 23:43:36.403918 | orchestrator | Tuesday 13 May 2025 
23:40:58 +0000 (0:00:00.937) 0:08:49.127 *********** 2025-05-13 23:43:36.403922 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.403927 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.403932 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.403937 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.403942 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.403946 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.403951 | orchestrator | 2025-05-13 23:43:36.403959 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 23:43:36.403964 | orchestrator | Tuesday 13 May 2025 23:40:59 +0000 (0:00:01.115) 0:08:50.242 *********** 2025-05-13 23:43:36.403969 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.403978 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.403983 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.403987 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.403992 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.403997 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.404002 | orchestrator | 2025-05-13 23:43:36.404006 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-13 23:43:36.404011 | orchestrator | Tuesday 13 May 2025 23:41:00 +0000 (0:00:01.000) 0:08:51.243 *********** 2025-05-13 23:43:36.404016 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.404021 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.404025 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.404030 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.404035 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.404039 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.404044 | orchestrator | 2025-05-13 23:43:36.404049 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-13 23:43:36.404054 | orchestrator | Tuesday 13 May 2025 23:41:01 +0000 (0:00:00.895) 0:08:52.139 *********** 2025-05-13 23:43:36.404058 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.404063 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.404068 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.404073 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.404078 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.404083 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.404087 | orchestrator | 2025-05-13 23:43:36.404092 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 23:43:36.404097 | orchestrator | Tuesday 13 May 2025 23:41:01 +0000 (0:00:00.606) 0:08:52.745 *********** 2025-05-13 23:43:36.404105 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.404109 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.404114 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.404119 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.404123 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.404128 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.404133 | orchestrator | 2025-05-13 23:43:36.404138 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 23:43:36.404142 | orchestrator | Tuesday 13 May 2025 23:41:02 +0000 (0:00:00.796) 0:08:53.542 
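
The long "Check for a ... container" ladder above and below is how ceph-handler discovers which daemons actually exist on each host before any restart logic runs: each check only executes on hosts in the matching inventory group, which is why the mon/mgr checks skip testbed-node-3..5 while the osd/mds/rgw checks skip testbed-node-0..2. A sketch of one such check, assuming the container runtime is addressed through a container_binary variable and a ceph-<daemon>-<hostname> naming scheme; both details are illustrative:

  - name: Check for an osd container
    ansible.builtin.command: >
      {{ container_binary }} ps -q
      --filter name=ceph-osd-{{ ansible_facts['hostname'] }}
    register: ceph_osd_container_stat
    changed_when: false
    failed_when: false
    when: inventory_hostname in groups['osds']
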
*********** 2025-05-13 23:43:36.404147 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.404152 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.404157 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.404161 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.404166 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.404171 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.404175 | orchestrator | 2025-05-13 23:43:36.404180 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 23:43:36.404185 | orchestrator | Tuesday 13 May 2025 23:41:03 +0000 (0:00:01.073) 0:08:54.615 *********** 2025-05-13 23:43:36.404190 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.404194 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.404199 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.404204 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.404208 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.404213 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.404218 | orchestrator | 2025-05-13 23:43:36.404223 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 23:43:36.404227 | orchestrator | Tuesday 13 May 2025 23:41:04 +0000 (0:00:01.264) 0:08:55.880 *********** 2025-05-13 23:43:36.404232 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.404237 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.404242 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.404247 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.404251 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.404256 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.404265 | orchestrator | 2025-05-13 23:43:36.404269 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 23:43:36.404274 | orchestrator | Tuesday 13 May 2025 23:41:05 +0000 (0:00:00.614) 0:08:56.495 *********** 2025-05-13 23:43:36.404278 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.404283 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.404287 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.404292 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.404296 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.404301 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.404305 | orchestrator | 2025-05-13 23:43:36.404310 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 23:43:36.404314 | orchestrator | Tuesday 13 May 2025 23:41:06 +0000 (0:00:00.838) 0:08:57.333 *********** 2025-05-13 23:43:36.404319 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.404323 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.404328 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.404332 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.404337 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.404341 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.404346 | orchestrator | 2025-05-13 23:43:36.404350 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 23:43:36.404355 | orchestrator | Tuesday 13 May 2025 23:41:07 +0000 (0:00:00.721) 0:08:58.054 *********** 2025-05-13 23:43:36.404359 | orchestrator | skipping: [testbed-node-0] 2025-05-13 
23:43:36.404364 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.404368 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.404373 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.404377 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.404382 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.404386 | orchestrator | 2025-05-13 23:43:36.404391 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 23:43:36.404395 | orchestrator | Tuesday 13 May 2025 23:41:07 +0000 (0:00:00.861) 0:08:58.916 *********** 2025-05-13 23:43:36.404400 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.404404 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.404409 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.404413 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.404418 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.404422 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.404427 | orchestrator | 2025-05-13 23:43:36.404435 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 23:43:36.404443 | orchestrator | Tuesday 13 May 2025 23:41:08 +0000 (0:00:00.599) 0:08:59.515 *********** 2025-05-13 23:43:36.404452 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.404460 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.404468 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.404476 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.404484 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.404492 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.404501 | orchestrator | 2025-05-13 23:43:36.404509 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 23:43:36.404517 | orchestrator | Tuesday 13 May 2025 23:41:09 +0000 (0:00:00.839) 0:09:00.354 *********** 2025-05-13 23:43:36.404525 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:43:36.404533 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:43:36.404541 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:43:36.404549 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.404557 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.404562 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.404566 | orchestrator | 2025-05-13 23:43:36.404571 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 23:43:36.404577 | orchestrator | Tuesday 13 May 2025 23:41:09 +0000 (0:00:00.625) 0:09:00.980 *********** 2025-05-13 23:43:36.404593 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.404601 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.404608 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.404614 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.404619 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.404623 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.404628 | orchestrator | 2025-05-13 23:43:36.404632 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 23:43:36.404656 | orchestrator | Tuesday 13 May 2025 23:41:10 +0000 (0:00:00.862) 0:09:01.843 *********** 2025-05-13 23:43:36.404662 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.404666 | orchestrator | ok: [testbed-node-1] 
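
Each "Set_fact handler_..._status" task then folds the corresponding container check into a boolean that later restart handlers consult. The pattern in the results is consistent: daemon-specific facts (mon/mgr vs osd/mds/rgw) are only set on the hosts that run that daemon, while crash and exporter status is set everywhere because those two services are deployed on all six nodes. A one-line sketch of such a fact, under the same illustrative names as above:

  - name: Set_fact handler_osd_status
    ansible.builtin.set_fact:
      handler_osd_status: "{{ (ceph_osd_container_stat.stdout_lines | default([])) | length > 0 }}"
    when: inventory_hostname in groups['osds']
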
2025-05-13 23:43:36.404671 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.404675 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.404680 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.404684 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.404688 | orchestrator | 2025-05-13 23:43:36.404693 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 23:43:36.404698 | orchestrator | Tuesday 13 May 2025 23:41:11 +0000 (0:00:00.618) 0:09:02.461 *********** 2025-05-13 23:43:36.404702 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.404707 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.404711 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.404716 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.404720 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.404725 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.404729 | orchestrator | 2025-05-13 23:43:36.404733 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-05-13 23:43:36.404738 | orchestrator | Tuesday 13 May 2025 23:41:12 +0000 (0:00:01.264) 0:09:03.726 *********** 2025-05-13 23:43:36.404742 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.404747 | orchestrator | 2025-05-13 23:43:36.404752 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-05-13 23:43:36.404756 | orchestrator | Tuesday 13 May 2025 23:41:17 +0000 (0:00:04.430) 0:09:08.157 *********** 2025-05-13 23:43:36.404761 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.404765 | orchestrator | 2025-05-13 23:43:36.404770 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-05-13 23:43:36.404774 | orchestrator | Tuesday 13 May 2025 23:41:19 +0000 (0:00:02.118) 0:09:10.276 *********** 2025-05-13 23:43:36.404779 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.404783 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.404788 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.404793 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.404797 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.404802 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.404806 | orchestrator | 2025-05-13 23:43:36.404811 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-05-13 23:43:36.404815 | orchestrator | Tuesday 13 May 2025 23:41:20 +0000 (0:00:01.614) 0:09:11.891 *********** 2025-05-13 23:43:36.404820 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.404824 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.404829 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.404833 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.404837 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.404842 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.404846 | orchestrator | 2025-05-13 23:43:36.404851 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-05-13 23:43:36.404856 | orchestrator | Tuesday 13 May 2025 23:41:22 +0000 (0:00:01.151) 0:09:13.042 *********** 2025-05-13 23:43:36.404861 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 
23:43:36.404866 | orchestrator | 2025-05-13 23:43:36.404871 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-05-13 23:43:36.404895 | orchestrator | Tuesday 13 May 2025 23:41:23 +0000 (0:00:01.234) 0:09:14.277 *********** 2025-05-13 23:43:36.404900 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.404905 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.404909 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.404914 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.404918 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.404923 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.404927 | orchestrator | 2025-05-13 23:43:36.404932 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-05-13 23:43:36.404936 | orchestrator | Tuesday 13 May 2025 23:41:25 +0000 (0:00:01.816) 0:09:16.093 *********** 2025-05-13 23:43:36.404941 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.404945 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.404950 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.404955 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.404959 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.404964 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.404968 | orchestrator | 2025-05-13 23:43:36.404976 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-05-13 23:43:36.404981 | orchestrator | Tuesday 13 May 2025 23:41:28 +0000 (0:00:03.249) 0:09:19.343 *********** 2025-05-13 23:43:36.404986 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.404991 | orchestrator | 2025-05-13 23:43:36.404995 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-05-13 23:43:36.405000 | orchestrator | Tuesday 13 May 2025 23:41:29 +0000 (0:00:01.290) 0:09:20.633 *********** 2025-05-13 23:43:36.405004 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.405009 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.405013 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.405018 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405022 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405027 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405031 | orchestrator | 2025-05-13 23:43:36.405036 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-05-13 23:43:36.405041 | orchestrator | Tuesday 13 May 2025 23:41:30 +0000 (0:00:00.929) 0:09:21.563 *********** 2025-05-13 23:43:36.405045 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:43:36.405050 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:43:36.405054 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.405059 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:43:36.405063 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.405068 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.405072 | orchestrator | 2025-05-13 23:43:36.405077 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-05-13 23:43:36.405081 | orchestrator | Tuesday 13 May 2025 23:41:32 +0000 (0:00:02.143) 0:09:23.707 
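
The ceph-crash play above is small but complete: it creates the shared client.crash key once on the first monitor, distributes it to all nodes, prepares /var/lib/ceph/crash/posted, templates a systemd unit for the containerized crash agent, and starts it; the immediate restart that follows is just the handler reacting to the freshly written unit file. Roughly equivalent tasks as a sketch; the caps are the ones upstream Ceph documents for the crash module, and the template name is an assumption:

  - name: Create client.crash keyring
    ansible.builtin.command: >
      ceph auth get-or-create client.crash
      mon 'allow profile crash' mgr 'allow profile crash'
      -o /etc/ceph/ceph.client.crash.keyring
    run_once: true
    delegate_to: "{{ groups['mons'][0] }}"

  - name: Create /var/lib/ceph/crash/posted
    ansible.builtin.file:
      path: /var/lib/ceph/crash/posted
      state: directory
      owner: ceph
      group: ceph
      mode: "0750"

  - name: Generate systemd unit file for ceph-crash container
    ansible.builtin.template:
      src: ceph-crash.service.j2   # template name assumed
      dest: /etc/systemd/system/ceph-crash@.service
    notify: Restart the ceph-crash service

  - name: Start the ceph-crash service
    ansible.builtin.systemd:
      name: ceph-crash@{{ ansible_facts['hostname'] }}
      state: started
      enabled: true
      daemon_reload: true
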
*********** 2025-05-13 23:43:36.405089 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:43:36.405094 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:43:36.405098 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:43:36.405103 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405107 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405112 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405116 | orchestrator | 2025-05-13 23:43:36.405121 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-13 23:43:36.405125 | orchestrator | 2025-05-13 23:43:36.405130 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 23:43:36.405134 | orchestrator | Tuesday 13 May 2025 23:41:33 +0000 (0:00:01.278) 0:09:24.986 *********** 2025-05-13 23:43:36.405139 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.405148 | orchestrator | 2025-05-13 23:43:36.405152 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 23:43:36.405157 | orchestrator | Tuesday 13 May 2025 23:41:34 +0000 (0:00:00.519) 0:09:25.505 *********** 2025-05-13 23:43:36.405161 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.405166 | orchestrator | 2025-05-13 23:43:36.405171 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-13 23:43:36.405175 | orchestrator | Tuesday 13 May 2025 23:41:35 +0000 (0:00:00.781) 0:09:26.286 *********** 2025-05-13 23:43:36.405180 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.405184 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.405189 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.405193 | orchestrator | 2025-05-13 23:43:36.405197 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 23:43:36.405202 | orchestrator | Tuesday 13 May 2025 23:41:35 +0000 (0:00:00.299) 0:09:26.585 *********** 2025-05-13 23:43:36.405206 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405211 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405215 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405220 | orchestrator | 2025-05-13 23:43:36.405224 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-13 23:43:36.405229 | orchestrator | Tuesday 13 May 2025 23:41:36 +0000 (0:00:00.701) 0:09:27.287 *********** 2025-05-13 23:43:36.405233 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405238 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405242 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405246 | orchestrator | 2025-05-13 23:43:36.405251 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 23:43:36.405255 | orchestrator | Tuesday 13 May 2025 23:41:37 +0000 (0:00:00.833) 0:09:28.120 *********** 2025-05-13 23:43:36.405260 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405264 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405269 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405273 | orchestrator | 2025-05-13 23:43:36.405277 | orchestrator | TASK [ceph-handler : Check for a mgr container] 
******************************** 2025-05-13 23:43:36.405282 | orchestrator | Tuesday 13 May 2025 23:41:37 +0000 (0:00:00.703) 0:09:28.823 *********** 2025-05-13 23:43:36.405286 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.405291 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.405296 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.405300 | orchestrator | 2025-05-13 23:43:36.405305 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-13 23:43:36.405309 | orchestrator | Tuesday 13 May 2025 23:41:38 +0000 (0:00:00.306) 0:09:29.129 *********** 2025-05-13 23:43:36.405314 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.405318 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.405323 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.405327 | orchestrator | 2025-05-13 23:43:36.405332 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 23:43:36.405336 | orchestrator | Tuesday 13 May 2025 23:41:38 +0000 (0:00:00.292) 0:09:29.422 *********** 2025-05-13 23:43:36.405341 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.405345 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.405350 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.405354 | orchestrator | 2025-05-13 23:43:36.405359 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 23:43:36.405366 | orchestrator | Tuesday 13 May 2025 23:41:39 +0000 (0:00:00.601) 0:09:30.024 *********** 2025-05-13 23:43:36.405371 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405375 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405380 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405384 | orchestrator | 2025-05-13 23:43:36.405389 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 23:43:36.405393 | orchestrator | Tuesday 13 May 2025 23:41:39 +0000 (0:00:00.720) 0:09:30.744 *********** 2025-05-13 23:43:36.405404 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405409 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405413 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405417 | orchestrator | 2025-05-13 23:43:36.405422 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 23:43:36.405426 | orchestrator | Tuesday 13 May 2025 23:41:40 +0000 (0:00:00.677) 0:09:31.422 *********** 2025-05-13 23:43:36.405431 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.405436 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.405440 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.405444 | orchestrator | 2025-05-13 23:43:36.405449 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 23:43:36.405454 | orchestrator | Tuesday 13 May 2025 23:41:40 +0000 (0:00:00.341) 0:09:31.763 *********** 2025-05-13 23:43:36.405458 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.405463 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.405467 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.405471 | orchestrator | 2025-05-13 23:43:36.405476 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 23:43:36.405480 | orchestrator | Tuesday 13 May 2025 
23:41:41 +0000 (0:00:00.584) 0:09:32.348 *********** 2025-05-13 23:43:36.405485 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405489 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405494 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405498 | orchestrator | 2025-05-13 23:43:36.405506 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 23:43:36.405510 | orchestrator | Tuesday 13 May 2025 23:41:41 +0000 (0:00:00.351) 0:09:32.699 *********** 2025-05-13 23:43:36.405515 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405520 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405524 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405529 | orchestrator | 2025-05-13 23:43:36.405533 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 23:43:36.405538 | orchestrator | Tuesday 13 May 2025 23:41:42 +0000 (0:00:00.317) 0:09:33.017 *********** 2025-05-13 23:43:36.405542 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405546 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405551 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405555 | orchestrator | 2025-05-13 23:43:36.405560 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 23:43:36.405564 | orchestrator | Tuesday 13 May 2025 23:41:42 +0000 (0:00:00.346) 0:09:33.364 *********** 2025-05-13 23:43:36.405569 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.405573 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.405578 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.405587 | orchestrator | 2025-05-13 23:43:36.405595 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 23:43:36.405603 | orchestrator | Tuesday 13 May 2025 23:41:42 +0000 (0:00:00.576) 0:09:33.940 *********** 2025-05-13 23:43:36.405611 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.405619 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.405627 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.405635 | orchestrator | 2025-05-13 23:43:36.405657 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 23:43:36.405665 | orchestrator | Tuesday 13 May 2025 23:41:43 +0000 (0:00:00.302) 0:09:34.243 *********** 2025-05-13 23:43:36.405672 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.405679 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.405686 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.405694 | orchestrator | 2025-05-13 23:43:36.405701 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 23:43:36.405709 | orchestrator | Tuesday 13 May 2025 23:41:43 +0000 (0:00:00.318) 0:09:34.561 *********** 2025-05-13 23:43:36.405716 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.405730 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405737 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405745 | orchestrator | 2025-05-13 23:43:36.405751 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 23:43:36.405756 | orchestrator | Tuesday 13 May 2025 23:41:43 +0000 (0:00:00.397) 0:09:34.959 *********** 2025-05-13 23:43:36.405760 | orchestrator | ok: [testbed-node-3] 2025-05-13 
23:43:36.405765 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.405769 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.405774 | orchestrator | 2025-05-13 23:43:36.405778 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-05-13 23:43:36.405782 | orchestrator | Tuesday 13 May 2025 23:41:44 +0000 (0:00:00.837) 0:09:35.796 *********** 2025-05-13 23:43:36.405787 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.405791 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.405796 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-13 23:43:36.405800 | orchestrator | 2025-05-13 23:43:36.405805 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-05-13 23:43:36.405809 | orchestrator | Tuesday 13 May 2025 23:41:45 +0000 (0:00:00.447) 0:09:36.243 *********** 2025-05-13 23:43:36.405814 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-13 23:43:36.405818 | orchestrator | 2025-05-13 23:43:36.405823 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-05-13 23:43:36.405827 | orchestrator | Tuesday 13 May 2025 23:41:47 +0000 (0:00:02.135) 0:09:38.379 *********** 2025-05-13 23:43:36.405833 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-13 23:43:36.405839 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.405844 | orchestrator | 2025-05-13 23:43:36.405852 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-05-13 23:43:36.405856 | orchestrator | Tuesday 13 May 2025 23:41:47 +0000 (0:00:00.597) 0:09:38.976 *********** 2025-05-13 23:43:36.405862 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 23:43:36.405871 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 23:43:36.405876 | orchestrator | 2025-05-13 23:43:36.405881 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-05-13 23:43:36.405885 | orchestrator | Tuesday 13 May 2025 23:41:56 +0000 (0:00:08.291) 0:09:47.267 *********** 2025-05-13 23:43:36.405889 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-13 23:43:36.405894 | orchestrator | 2025-05-13 23:43:36.405898 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-05-13 23:43:36.405903 | orchestrator | Tuesday 13 May 2025 23:41:59 +0000 (0:00:03.689) 0:09:50.957 *********** 2025-05-13 23:43:36.405907 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.405912 | orchestrator | 2025-05-13 23:43:36.405921 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and 
mds directories] ********************* 2025-05-13 23:43:36.405925 | orchestrator | Tuesday 13 May 2025 23:42:00 +0000 (0:00:00.828) 0:09:51.785 *********** 2025-05-13 23:43:36.405930 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-13 23:43:36.405934 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-13 23:43:36.405943 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-13 23:43:36.405948 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-13 23:43:36.405956 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-13 23:43:36.405965 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-13 23:43:36.405978 | orchestrator | 2025-05-13 23:43:36.405985 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-05-13 23:43:36.405991 | orchestrator | Tuesday 13 May 2025 23:42:01 +0000 (0:00:01.132) 0:09:52.918 *********** 2025-05-13 23:43:36.405999 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:43:36.406005 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-13 23:43:36.406044 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 23:43:36.406054 | orchestrator | 2025-05-13 23:43:36.406062 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-05-13 23:43:36.406070 | orchestrator | Tuesday 13 May 2025 23:42:04 +0000 (0:00:03.048) 0:09:55.966 *********** 2025-05-13 23:43:36.406077 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 23:43:36.406085 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-13 23:43:36.406092 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.406100 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 23:43:36.406104 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-13 23:43:36.406109 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.406114 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 23:43:36.406118 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-13 23:43:36.406122 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.406127 | orchestrator | 2025-05-13 23:43:36.406131 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-05-13 23:43:36.406136 | orchestrator | Tuesday 13 May 2025 23:42:06 +0000 (0:00:01.272) 0:09:57.239 *********** 2025-05-13 23:43:36.406140 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.406145 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.406149 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.406154 | orchestrator | 2025-05-13 23:43:36.406158 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-05-13 23:43:36.406163 | orchestrator | Tuesday 13 May 2025 23:42:09 +0000 (0:00:02.794) 0:10:00.034 *********** 2025-05-13 23:43:36.406167 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.406172 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.406176 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.406181 | orchestrator | 2025-05-13 23:43:36.406185 | orchestrator | TASK [ceph-mds : Containerized.yml] 
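
Worth noting what the mds play just did on the Ceph side: two replicated pools (cephfs_data and cephfs_metadata, pg_num/pgp_num 16, size 3, replicated_rule, application cephfs), a filesystem on top of them, and a per-node mds keyring fetched from the monitor. In plain ceph CLI terms, wrapped as tasks, this is approximately the following sketch; the filesystem name is not shown in the log, so "cephfs" is an assumption, and the mds caps are the standard documented ones:

  - name: Create filesystem pools
    # size 3 and the cephfs application tag are applied in separate pool-set steps, omitted here
    ansible.builtin.command: ceph osd pool create {{ item }} 16 16 replicated replicated_rule
    loop:
      - cephfs_data
      - cephfs_metadata
    run_once: true
    delegate_to: "{{ groups['mons'][0] }}"

  - name: Create ceph filesystem
    ansible.builtin.command: ceph fs new cephfs cephfs_metadata cephfs_data   # fs name assumed
    run_once: true
    delegate_to: "{{ groups['mons'][0] }}"

  - name: Create mds keyring on the monitor
    ansible.builtin.command: >
      ceph auth get-or-create mds.{{ ansible_facts['hostname'] }}
      mon 'allow profile mds' osd 'allow rwx' mds 'allow'
    register: mds_keyring
    delegate_to: "{{ groups['mons'][0] }}"

  - name: Install mds keyring on the mds node
    ansible.builtin.copy:
      content: "{{ mds_keyring.stdout }}\n"
      dest: /var/lib/ceph/mds/ceph-{{ ansible_facts['hostname'] }}/keyring
      owner: ceph
      group: ceph
      mode: "0600"
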
******************************************** 2025-05-13 23:43:36.406190 | orchestrator | Tuesday 13 May 2025 23:42:09 +0000 (0:00:00.498) 0:10:00.532 *********** 2025-05-13 23:43:36.406194 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.406198 | orchestrator | 2025-05-13 23:43:36.406203 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-05-13 23:43:36.406207 | orchestrator | Tuesday 13 May 2025 23:42:10 +0000 (0:00:00.716) 0:10:01.248 *********** 2025-05-13 23:43:36.406212 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.406216 | orchestrator | 2025-05-13 23:43:36.406221 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-05-13 23:43:36.406225 | orchestrator | Tuesday 13 May 2025 23:42:10 +0000 (0:00:00.530) 0:10:01.778 *********** 2025-05-13 23:43:36.406234 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.406238 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.406243 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.406247 | orchestrator | 2025-05-13 23:43:36.406257 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-05-13 23:43:36.406262 | orchestrator | Tuesday 13 May 2025 23:42:12 +0000 (0:00:01.710) 0:10:03.489 *********** 2025-05-13 23:43:36.406266 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.406270 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.406275 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.406279 | orchestrator | 2025-05-13 23:43:36.406284 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-05-13 23:43:36.406288 | orchestrator | Tuesday 13 May 2025 23:42:13 +0000 (0:00:01.458) 0:10:04.948 *********** 2025-05-13 23:43:36.406293 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.406297 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.406302 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.406307 | orchestrator | 2025-05-13 23:43:36.406311 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-05-13 23:43:36.406315 | orchestrator | Tuesday 13 May 2025 23:42:16 +0000 (0:00:02.078) 0:10:07.026 *********** 2025-05-13 23:43:36.406320 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.406325 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.406329 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.406333 | orchestrator | 2025-05-13 23:43:36.406338 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-05-13 23:43:36.406342 | orchestrator | Tuesday 13 May 2025 23:42:18 +0000 (0:00:02.058) 0:10:09.084 *********** 2025-05-13 23:43:36.406347 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.406351 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.406356 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.406361 | orchestrator | 2025-05-13 23:43:36.406370 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-13 23:43:36.406375 | orchestrator | Tuesday 13 May 2025 23:42:19 +0000 (0:00:01.474) 0:10:10.558 *********** 2025-05-13 23:43:36.406379 | orchestrator | changed: 
[testbed-node-3] 2025-05-13 23:43:36.406384 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.406388 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.406393 | orchestrator | 2025-05-13 23:43:36.406397 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-13 23:43:36.406402 | orchestrator | Tuesday 13 May 2025 23:42:20 +0000 (0:00:00.722) 0:10:11.281 *********** 2025-05-13 23:43:36.406406 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.406411 | orchestrator | 2025-05-13 23:43:36.406415 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-13 23:43:36.406419 | orchestrator | Tuesday 13 May 2025 23:42:21 +0000 (0:00:00.898) 0:10:12.180 *********** 2025-05-13 23:43:36.406424 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.406429 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.406433 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.406438 | orchestrator | 2025-05-13 23:43:36.406442 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-13 23:43:36.406447 | orchestrator | Tuesday 13 May 2025 23:42:21 +0000 (0:00:00.331) 0:10:12.512 *********** 2025-05-13 23:43:36.406451 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.406455 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.406460 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.406465 | orchestrator | 2025-05-13 23:43:36.406469 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-13 23:43:36.406474 | orchestrator | Tuesday 13 May 2025 23:42:22 +0000 (0:00:01.273) 0:10:13.785 *********** 2025-05-13 23:43:36.406478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 23:43:36.406483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 23:43:36.406487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 23:43:36.406492 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.406496 | orchestrator | 2025-05-13 23:43:36.406505 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-13 23:43:36.406509 | orchestrator | Tuesday 13 May 2025 23:42:23 +0000 (0:00:00.923) 0:10:14.709 *********** 2025-05-13 23:43:36.406514 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.406518 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.406523 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.406527 | orchestrator | 2025-05-13 23:43:36.406532 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-13 23:43:36.406536 | orchestrator | 2025-05-13 23:43:36.406541 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 23:43:36.406545 | orchestrator | Tuesday 13 May 2025 23:42:24 +0000 (0:00:00.847) 0:10:15.557 *********** 2025-05-13 23:43:36.406549 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.406554 | orchestrator | 2025-05-13 23:43:36.406559 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 23:43:36.406563 | orchestrator | Tuesday 13 May 
2025 23:42:25 +0000 (0:00:00.474) 0:10:16.032 *********** 2025-05-13 23:43:36.406567 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.406572 | orchestrator | 2025-05-13 23:43:36.406577 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-13 23:43:36.406581 | orchestrator | Tuesday 13 May 2025 23:42:25 +0000 (0:00:00.717) 0:10:16.749 *********** 2025-05-13 23:43:36.406586 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.406590 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.406594 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.406599 | orchestrator | 2025-05-13 23:43:36.406603 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 23:43:36.406608 | orchestrator | Tuesday 13 May 2025 23:42:26 +0000 (0:00:00.335) 0:10:17.085 *********** 2025-05-13 23:43:36.406612 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.406617 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.406624 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.406629 | orchestrator | 2025-05-13 23:43:36.406633 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-13 23:43:36.406674 | orchestrator | Tuesday 13 May 2025 23:42:26 +0000 (0:00:00.731) 0:10:17.817 *********** 2025-05-13 23:43:36.406680 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.406684 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.406689 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.406693 | orchestrator | 2025-05-13 23:43:36.406698 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 23:43:36.406703 | orchestrator | Tuesday 13 May 2025 23:42:27 +0000 (0:00:00.939) 0:10:18.757 *********** 2025-05-13 23:43:36.406707 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.406712 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.406716 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.406721 | orchestrator | 2025-05-13 23:43:36.406725 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-13 23:43:36.406730 | orchestrator | Tuesday 13 May 2025 23:42:28 +0000 (0:00:00.754) 0:10:19.511 *********** 2025-05-13 23:43:36.406734 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.406739 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.406744 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.406748 | orchestrator | 2025-05-13 23:43:36.406756 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-13 23:43:36.406764 | orchestrator | Tuesday 13 May 2025 23:42:28 +0000 (0:00:00.309) 0:10:19.821 *********** 2025-05-13 23:43:36.406778 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.406786 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.406794 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.406802 | orchestrator | 2025-05-13 23:43:36.406809 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 23:43:36.406830 | orchestrator | Tuesday 13 May 2025 23:42:29 +0000 (0:00:00.291) 0:10:20.112 *********** 2025-05-13 23:43:36.406838 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.406846 | 
orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.406854 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.406862 | orchestrator | 2025-05-13 23:43:36.406870 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 23:43:36.406878 | orchestrator | Tuesday 13 May 2025 23:42:29 +0000 (0:00:00.548) 0:10:20.660 *********** 2025-05-13 23:43:36.406886 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.406894 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.406901 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.406910 | orchestrator | 2025-05-13 23:43:36.406918 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 23:43:36.406926 | orchestrator | Tuesday 13 May 2025 23:42:30 +0000 (0:00:00.712) 0:10:21.372 *********** 2025-05-13 23:43:36.406935 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.406943 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.406951 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.406957 | orchestrator | 2025-05-13 23:43:36.406962 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 23:43:36.406967 | orchestrator | Tuesday 13 May 2025 23:42:31 +0000 (0:00:00.786) 0:10:22.158 *********** 2025-05-13 23:43:36.406971 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.406975 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.406980 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.406985 | orchestrator | 2025-05-13 23:43:36.406989 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 23:43:36.406994 | orchestrator | Tuesday 13 May 2025 23:42:31 +0000 (0:00:00.308) 0:10:22.467 *********** 2025-05-13 23:43:36.406998 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.407003 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.407008 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.407012 | orchestrator | 2025-05-13 23:43:36.407016 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 23:43:36.407021 | orchestrator | Tuesday 13 May 2025 23:42:32 +0000 (0:00:00.608) 0:10:23.076 *********** 2025-05-13 23:43:36.407026 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.407030 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.407035 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.407039 | orchestrator | 2025-05-13 23:43:36.407043 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 23:43:36.407048 | orchestrator | Tuesday 13 May 2025 23:42:32 +0000 (0:00:00.368) 0:10:23.444 *********** 2025-05-13 23:43:36.407052 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.407057 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.407062 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.407066 | orchestrator | 2025-05-13 23:43:36.407071 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 23:43:36.407075 | orchestrator | Tuesday 13 May 2025 23:42:32 +0000 (0:00:00.337) 0:10:23.781 *********** 2025-05-13 23:43:36.407080 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.407084 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.407089 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.407093 | 
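
The ceph-rgw tasks that follow mirror the mds pattern, but per radosgw instance: the log shows a single instance rgw0 on each of testbed-node-3/4/5, bound to the node's 192.168.16.x address on port 8081, with a directory and keyring created per instance (delegated to the first monitor, as the "-> testbed-node-0" results show) and the default.rgw.* pools created as plain replicated pools (pg_num 8, size 3), which is why the "Create ec profile" step is skipped. A sketch of the per-instance steps, using the upstream-documented rgw caps; paths and naming follow the usual /var/lib/ceph/radosgw layout and are assumptions:

  - name: Create rados gateway directory for instance rgw0
    ansible.builtin.file:
      path: /var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.rgw0
      state: directory
      owner: ceph
      group: ceph
    delegate_to: "{{ groups['mons'][0] }}"

  - name: Create rgw keyring for instance rgw0
    ansible.builtin.command: >
      ceph auth get-or-create client.rgw.{{ ansible_facts['hostname'] }}.rgw0
      osd 'allow rwx' mon 'allow rw'
      -o /var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.rgw0/keyring
    delegate_to: "{{ groups['mons'][0] }}"
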
orchestrator | 2025-05-13 23:43:36.407097 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 23:43:36.407102 | orchestrator | Tuesday 13 May 2025 23:42:33 +0000 (0:00:00.373) 0:10:24.154 *********** 2025-05-13 23:43:36.407106 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.407111 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.407115 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.407120 | orchestrator | 2025-05-13 23:43:36.407124 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 23:43:36.407129 | orchestrator | Tuesday 13 May 2025 23:42:33 +0000 (0:00:00.594) 0:10:24.749 *********** 2025-05-13 23:43:36.407139 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.407144 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.407148 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.407152 | orchestrator | 2025-05-13 23:43:36.407157 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 23:43:36.407162 | orchestrator | Tuesday 13 May 2025 23:42:34 +0000 (0:00:00.289) 0:10:25.038 *********** 2025-05-13 23:43:36.407166 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.407171 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.407175 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.407180 | orchestrator | 2025-05-13 23:43:36.407191 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 23:43:36.407196 | orchestrator | Tuesday 13 May 2025 23:42:34 +0000 (0:00:00.272) 0:10:25.311 *********** 2025-05-13 23:43:36.407201 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.407205 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.407210 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.407214 | orchestrator | 2025-05-13 23:43:36.407218 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 23:43:36.407222 | orchestrator | Tuesday 13 May 2025 23:42:34 +0000 (0:00:00.300) 0:10:25.612 *********** 2025-05-13 23:43:36.407226 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.407230 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.407234 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.407239 | orchestrator | 2025-05-13 23:43:36.407243 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-05-13 23:43:36.407247 | orchestrator | Tuesday 13 May 2025 23:42:35 +0000 (0:00:00.671) 0:10:26.283 *********** 2025-05-13 23:43:36.407251 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.407255 | orchestrator | 2025-05-13 23:43:36.407259 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-13 23:43:36.407263 | orchestrator | Tuesday 13 May 2025 23:42:35 +0000 (0:00:00.490) 0:10:26.774 *********** 2025-05-13 23:43:36.407267 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:43:36.407271 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-13 23:43:36.407275 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 23:43:36.407280 | orchestrator | 2025-05-13 23:43:36.407284 | orchestrator | TASK [ceph-rgw 
: Copy ceph key(s) if needed] *********************************** 2025-05-13 23:43:36.407293 | orchestrator | Tuesday 13 May 2025 23:42:37 +0000 (0:00:02.192) 0:10:28.966 *********** 2025-05-13 23:43:36.407297 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 23:43:36.407301 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-13 23:43:36.407305 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.407309 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 23:43:36.407314 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-13 23:43:36.407318 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.407322 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 23:43:36.407326 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-13 23:43:36.407330 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.407334 | orchestrator | 2025-05-13 23:43:36.407338 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-05-13 23:43:36.407342 | orchestrator | Tuesday 13 May 2025 23:42:39 +0000 (0:00:01.508) 0:10:30.475 *********** 2025-05-13 23:43:36.407346 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.407350 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.407355 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.407359 | orchestrator | 2025-05-13 23:43:36.407363 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-05-13 23:43:36.407367 | orchestrator | Tuesday 13 May 2025 23:42:39 +0000 (0:00:00.324) 0:10:30.799 *********** 2025-05-13 23:43:36.407374 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.407379 | orchestrator | 2025-05-13 23:43:36.407383 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-05-13 23:43:36.407387 | orchestrator | Tuesday 13 May 2025 23:42:40 +0000 (0:00:00.543) 0:10:31.343 *********** 2025-05-13 23:43:36.407391 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.407396 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.407401 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.407405 | orchestrator | 2025-05-13 23:43:36.407409 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-05-13 23:43:36.407413 | orchestrator | Tuesday 13 May 2025 23:42:41 +0000 (0:00:01.056) 0:10:32.399 *********** 2025-05-13 23:43:36.407417 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:43:36.407421 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-13 23:43:36.407425 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:43:36.407429 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if 
groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-13 23:43:36.407434 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:43:36.407438 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-13 23:43:36.407442 | orchestrator | 2025-05-13 23:43:36.407446 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-13 23:43:36.407450 | orchestrator | Tuesday 13 May 2025 23:42:45 +0000 (0:00:04.418) 0:10:36.818 *********** 2025-05-13 23:43:36.407454 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:43:36.407461 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 23:43:36.407466 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:43:36.407470 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 23:43:36.407474 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:43:36.407478 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 23:43:36.407482 | orchestrator | 2025-05-13 23:43:36.407486 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-13 23:43:36.407490 | orchestrator | Tuesday 13 May 2025 23:42:48 +0000 (0:00:02.391) 0:10:39.209 *********** 2025-05-13 23:43:36.407494 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 23:43:36.407498 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.407502 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 23:43:36.407507 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.407511 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 23:43:36.407515 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.407519 | orchestrator | 2025-05-13 23:43:36.407523 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-05-13 23:43:36.407527 | orchestrator | Tuesday 13 May 2025 23:42:49 +0000 (0:00:01.250) 0:10:40.460 *********** 2025-05-13 23:43:36.407531 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-13 23:43:36.407535 | orchestrator | 2025-05-13 23:43:36.407543 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-05-13 23:43:36.407547 | orchestrator | Tuesday 13 May 2025 23:42:49 +0000 (0:00:00.450) 0:10:40.910 *********** 2025-05-13 23:43:36.407572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 23:43:36.407577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 23:43:36.407581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 23:43:36.407585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 23:43:36.407590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2025-05-13 23:43:36.407594 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.407598 | orchestrator | 2025-05-13 23:43:36.407602 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-05-13 23:43:36.407607 | orchestrator | Tuesday 13 May 2025 23:42:50 +0000 (0:00:00.650) 0:10:41.561 *********** 2025-05-13 23:43:36.407611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 23:43:36.407615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 23:43:36.407619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 23:43:36.407624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 23:43:36.407628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 23:43:36.407632 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.407650 | orchestrator | 2025-05-13 23:43:36.407655 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-05-13 23:43:36.407659 | orchestrator | Tuesday 13 May 2025 23:42:51 +0000 (0:00:00.591) 0:10:42.153 *********** 2025-05-13 23:43:36.407663 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-13 23:43:36.407668 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-13 23:43:36.407672 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-13 23:43:36.407676 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-13 23:43:36.407680 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-13 23:43:36.407684 | orchestrator | 2025-05-13 23:43:36.407688 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-05-13 23:43:36.407693 | orchestrator | Tuesday 13 May 2025 23:43:22 +0000 (0:00:31.258) 0:11:13.412 *********** 2025-05-13 23:43:36.407697 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.407701 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.407705 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.407709 | orchestrator | 2025-05-13 23:43:36.407716 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-05-13 23:43:36.407724 | orchestrator | Tuesday 13 May 2025 23:43:22 +0000 (0:00:00.342) 0:11:13.754 *********** 2025-05-13 23:43:36.407728 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.407732 | orchestrator | skipping: [testbed-node-4] 2025-05-13 
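[editor's note] The two preceding steps create the rgw keyrings on the first monitor and copy them out to the gateway nodes; the 31-second "Create rgw pools" step above then turns every entry of rgw_create_pools into a replicated pool (pg_num 8, size 3). A minimal sketch of the equivalent operations, assuming the rgw_create_pools dict and mon_group_name visible in the log; ceph-ansible actually drives this through its own modules, so treat this as illustrative only:

    - name: Create one replicated pool per rgw_create_pools entry (sketch)
      ansible.builtin.command: >
        ceph osd pool create {{ item.key }} {{ item.value.pg_num }} replicated
      loop: "{{ rgw_create_pools | dict2items }}"
      delegate_to: "{{ groups[mon_group_name][0] }}"  # mirrors the testbed-node-0 delegation above

    - name: Apply the replica count and tag each pool for rgw (sketch)
      ansible.builtin.shell: |
        ceph osd pool set {{ item.key }} size {{ item.value.size }}
        ceph osd pool application enable {{ item.key }} rgw
      loop: "{{ rgw_create_pools | dict2items }}"
      delegate_to: "{{ groups[mon_group_name][0] }}"

On a containerized deployment like this one, the ceph CLI calls would additionally be wrapped to run inside the monitor container. [end note]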
23:43:36.407736 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.407740 | orchestrator | 2025-05-13 23:43:36.407744 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-05-13 23:43:36.407748 | orchestrator | Tuesday 13 May 2025 23:43:23 +0000 (0:00:00.328) 0:11:14.082 *********** 2025-05-13 23:43:36.407753 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.407757 | orchestrator | 2025-05-13 23:43:36.407761 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-05-13 23:43:36.407765 | orchestrator | Tuesday 13 May 2025 23:43:23 +0000 (0:00:00.881) 0:11:14.964 *********** 2025-05-13 23:43:36.407769 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.407773 | orchestrator | 2025-05-13 23:43:36.407777 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-05-13 23:43:36.407782 | orchestrator | Tuesday 13 May 2025 23:43:24 +0000 (0:00:00.533) 0:11:15.497 *********** 2025-05-13 23:43:36.407786 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.407790 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.407794 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.407798 | orchestrator | 2025-05-13 23:43:36.407802 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-05-13 23:43:36.407808 | orchestrator | Tuesday 13 May 2025 23:43:25 +0000 (0:00:01.328) 0:11:16.825 *********** 2025-05-13 23:43:36.407819 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.407825 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.407831 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.407838 | orchestrator | 2025-05-13 23:43:36.407844 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-05-13 23:43:36.407850 | orchestrator | Tuesday 13 May 2025 23:43:27 +0000 (0:00:01.468) 0:11:18.294 *********** 2025-05-13 23:43:36.407855 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:43:36.407861 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:43:36.407867 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:43:36.407873 | orchestrator | 2025-05-13 23:43:36.407880 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-05-13 23:43:36.407886 | orchestrator | Tuesday 13 May 2025 23:43:29 +0000 (0:00:01.959) 0:11:20.254 *********** 2025-05-13 23:43:36.407893 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.407900 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.407907 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-13 23:43:36.407914 | orchestrator | 2025-05-13 23:43:36.407919 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-13 23:43:36.407923 | orchestrator | Tuesday 13 May 2025 23:43:31 +0000 (0:00:02.712) 0:11:22.966 *********** 2025-05-13 23:43:36.407927 | 
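[editor's note] The systemd.yml block above templates a ceph-radosgw@.service unit and a ceph-radosgw.target, enables the target, and finally starts one unit per entry of rgw_instances. A hedged sketch of that last step, assuming ceph-ansible's usual rgw.<hostname>.<instance_name> unit naming (the unit template itself lives in the role and is not reproduced here):

    - name: Systemd start rgw container (per-instance unit start, sketch)
      ansible.builtin.systemd:
        name: "ceph-radosgw@rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
        state: started
        enabled: true
        daemon_reload: true  # the unit file was just templated above
      loop: "{{ rgw_instances }}"

Grouping the per-instance units under ceph-radosgw.target is what later allows the whole gateway on a node to be stopped or restarted with a single systemctl call. [end note]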
orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.407931 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.407935 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.407939 | orchestrator | 2025-05-13 23:43:36.407943 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-05-13 23:43:36.407949 | orchestrator | Tuesday 13 May 2025 23:43:32 +0000 (0:00:00.350) 0:11:23.317 *********** 2025-05-13 23:43:36.407956 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:43:36.407963 | orchestrator | 2025-05-13 23:43:36.407974 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-13 23:43:36.407981 | orchestrator | Tuesday 13 May 2025 23:43:32 +0000 (0:00:00.512) 0:11:23.829 *********** 2025-05-13 23:43:36.407987 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.407993 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.408000 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.408007 | orchestrator | 2025-05-13 23:43:36.408013 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-05-13 23:43:36.408020 | orchestrator | Tuesday 13 May 2025 23:43:33 +0000 (0:00:00.641) 0:11:24.470 *********** 2025-05-13 23:43:36.408027 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.408033 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:43:36.408040 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:43:36.408046 | orchestrator | 2025-05-13 23:43:36.408053 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-13 23:43:36.408060 | orchestrator | Tuesday 13 May 2025 23:43:33 +0000 (0:00:00.356) 0:11:24.827 *********** 2025-05-13 23:43:36.408067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 23:43:36.408073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 23:43:36.408081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 23:43:36.408086 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:43:36.408090 | orchestrator | 2025-05-13 23:43:36.408094 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-05-13 23:43:36.408098 | orchestrator | Tuesday 13 May 2025 23:43:34 +0000 (0:00:00.586) 0:11:25.413 *********** 2025-05-13 23:43:36.408102 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:43:36.408106 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:43:36.408110 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:43:36.408114 | orchestrator | 2025-05-13 23:43:36.408118 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:43:36.408126 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-05-13 23:43:36.408131 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-05-13 23:43:36.408135 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-05-13 23:43:36.408139 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-05-13 23:43:36.408143 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 
failed=0 skipped=123  rescued=0 ignored=0 2025-05-13 23:43:36.408147 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-05-13 23:43:36.408152 | orchestrator | 2025-05-13 23:43:36.408156 | orchestrator | 2025-05-13 23:43:36.408160 | orchestrator | 2025-05-13 23:43:36.408164 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:43:36.408168 | orchestrator | Tuesday 13 May 2025 23:43:34 +0000 (0:00:00.249) 0:11:25.663 *********** 2025-05-13 23:43:36.408172 | orchestrator | =============================================================================== 2025-05-13 23:43:36.408176 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 75.14s 2025-05-13 23:43:36.408185 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.74s 2025-05-13 23:43:36.408189 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.26s 2025-05-13 23:43:36.408193 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.39s 2025-05-13 23:43:36.408197 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.82s 2025-05-13 23:43:36.408206 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.62s 2025-05-13 23:43:36.408210 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.84s 2025-05-13 23:43:36.408214 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.91s 2025-05-13 23:43:36.408218 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.17s 2025-05-13 23:43:36.408222 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.29s 2025-05-13 23:43:36.408226 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.75s 2025-05-13 23:43:36.408230 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.41s 2025-05-13 23:43:36.408234 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.93s 2025-05-13 23:43:36.408238 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.43s 2025-05-13 23:43:36.408242 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.42s 2025-05-13 23:43:36.408246 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.81s 2025-05-13 23:43:36.408250 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.69s 2025-05-13 23:43:36.408254 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.53s 2025-05-13 23:43:36.408258 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.36s 2025-05-13 23:43:36.408262 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.25s 2025-05-13 23:43:36.408266 | orchestrator | 2025-05-13 23:43:36 | INFO  | Task e3a94d85-942a-4539-983d-3a4a13b619db is in state STARTED 2025-05-13 23:43:36.408271 | orchestrator | 2025-05-13 23:43:36 | INFO  | Task d4c4b1b4-cd92-4f0c-b208-9898dab4a4b8 is in state STARTED 2025-05-13 23:43:36.408275 | orchestrator | 2025-05-13 23:43:36 | INFO  | Task 
431a9fc2-c86e-4eb3-8b59-dfef1748524e is in state STARTED 2025-05-13 23:43:36.408279 | orchestrator | 2025-05-13 23:43:36 | INFO  | Wait 1 second(s) until the next check [... identical STARTED polls for the same three tasks repeat every ~3 seconds from 23:43:39 through 23:44:37 ...] 2025-05-13 23:44:40.597800 | orchestrator | 2025-05-13 23:44:40 | INFO  | Task
e3a94d85-942a-4539-983d-3a4a13b619db is in state STARTED 2025-05-13 23:44:40.599299 | orchestrator | 2025-05-13 23:44:40 | INFO  | Task d4c4b1b4-cd92-4f0c-b208-9898dab4a4b8 is in state STARTED 2025-05-13 23:44:40.605074 | orchestrator | 2025-05-13 23:44:40 | INFO  | Task 431a9fc2-c86e-4eb3-8b59-dfef1748524e is in state STARTED 2025-05-13 23:44:40.605105 | orchestrator | 2025-05-13 23:44:40 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:44:43.651196 | orchestrator | 2025-05-13 23:44:43 | INFO  | Task e3a94d85-942a-4539-983d-3a4a13b619db is in state STARTED 2025-05-13 23:44:43.652816 | orchestrator | 2025-05-13 23:44:43 | INFO  | Task d4c4b1b4-cd92-4f0c-b208-9898dab4a4b8 is in state STARTED 2025-05-13 23:44:43.655726 | orchestrator | 2025-05-13 23:44:43 | INFO  | Task 431a9fc2-c86e-4eb3-8b59-dfef1748524e is in state SUCCESS 2025-05-13 23:44:43.657470 | orchestrator | 2025-05-13 23:44:43.657556 | orchestrator | 2025-05-13 23:44:43.657573 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:44:43.657586 | orchestrator | 2025-05-13 23:44:43.657598 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:44:43.657638 | orchestrator | Tuesday 13 May 2025 23:41:39 +0000 (0:00:00.266) 0:00:00.266 *********** 2025-05-13 23:44:43.657649 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:44:43.657662 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:44:43.657673 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:44:43.657684 | orchestrator | 2025-05-13 23:44:43.657695 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:44:43.657707 | orchestrator | Tuesday 13 May 2025 23:41:39 +0000 (0:00:00.298) 0:00:00.565 *********** 2025-05-13 23:44:43.657718 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-13 23:44:43.657729 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-13 23:44:43.657740 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-13 23:44:43.657779 | orchestrator | 2025-05-13 23:44:43.657792 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-13 23:44:43.657803 | orchestrator | 2025-05-13 23:44:43.657813 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-13 23:44:43.657824 | orchestrator | Tuesday 13 May 2025 23:41:39 +0000 (0:00:00.455) 0:00:01.020 *********** 2025-05-13 23:44:43.657835 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:44:43.657876 | orchestrator | 2025-05-13 23:44:43.657887 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-13 23:44:43.657898 | orchestrator | Tuesday 13 May 2025 23:41:40 +0000 (0:00:00.494) 0:00:01.515 *********** 2025-05-13 23:44:43.657908 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 23:44:43.657919 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 23:44:43.657930 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 23:44:43.657940 | orchestrator | 2025-05-13 23:44:43.657954 | orchestrator | TASK [opensearch : Ensuring config directories exist] 
************************** 2025-05-13 23:44:43.657973 | orchestrator | Tuesday 13 May 2025 23:41:40 +0000 (0:00:00.644) 0:00:02.159 *********** 2025-05-13 23:44:43.657995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.658103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.658145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.658163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.658191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.658212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.658226 | orchestrator | 2025-05-13 23:44:43.658238 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-13 23:44:43.658251 | orchestrator | Tuesday 13 May 2025 23:41:42 +0000 (0:00:01.784) 0:00:03.944 *********** 2025-05-13 23:44:43.658262 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:44:43.658275 | orchestrator | 2025-05-13 23:44:43.658286 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-13 23:44:43.658298 | orchestrator | Tuesday 13 May 2025 23:41:43 +0000 (0:00:00.535) 0:00:04.479 *********** 2025-05-13 23:44:43.658321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.658343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.658357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.658372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.658396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.658409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.658428 | orchestrator | 2025-05-13 23:44:43.658440 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-13 23:44:43.658451 | orchestrator | Tuesday 13 May 2025 23:41:46 +0000 (0:00:02.844) 0:00:07.324 *********** 2025-05-13 23:44:43.658462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 23:44:43.658474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 23:44:43.658486 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:44:43.658499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 23:44:43.658518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 23:44:43.658537 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:44:43.658664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 23:44:43.658689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 23:44:43.658702 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:44:43.658713 | orchestrator | 2025-05-13 23:44:43.658724 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-13 23:44:43.658735 | orchestrator | Tuesday 13 May 2025 23:41:47 +0000 (0:00:01.295) 0:00:08.619 *********** 2025-05-13 23:44:43.658752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 23:44:43.658783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 23:44:43.658796 | orchestrator | skipping: [testbed-node-0] 
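[editor's note] These backend-TLS copy tasks (certificate and key) are skipped here, the expected outcome when backend TLS is left disabled in this testbed. As a hedged sketch, the kolla-ansible switches that would activate them are the globals.yml entries below (variable names from the kolla-ansible TLS documentation; nothing here is taken from this job's actual configuration):

    # /etc/kolla/globals.yml -- illustrative only
    kolla_enable_tls_backend: "yes"        # enable proxy-to-backend TLS
    kolla_copy_ca_into_containers: "yes"   # distribute the CA bundle into the containers
    openstack_cacert: "/etc/ssl/certs/ca-certificates.crt"

With these set, the service-cert-copy role would copy the backend certificate and key into each service's config directory instead of skipping. [end note]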
2025-05-13 23:44:43.658807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 23:44:43.658820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 23:44:43.658832 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:44:43.658848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 23:44:43.658875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 23:44:43.658887 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:44:43.658897 | orchestrator | 2025-05-13 23:44:43.658908 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-13 23:44:43.658920 | orchestrator | Tuesday 13 May 2025 23:41:48 +0000 (0:00:00.928) 0:00:09.547 *********** 2025-05-13 23:44:43.658931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.658947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.658964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.658983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.659003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.659016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.659028 | orchestrator | 2025-05-13 23:44:43.659039 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-13 23:44:43.659050 | orchestrator | Tuesday 13 May 2025 23:41:50 +0000 (0:00:02.390) 0:00:11.938 *********** 2025-05-13 23:44:43.659061 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:44:43.659073 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:44:43.659083 | 
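The 'healthcheck' block in each service dict above (interval, retries, start_period, timeout, and a healthcheck_curl test command) drives the Docker healthcheck that kolla attaches to the container. Below is a minimal Python sketch of those semantics; it is an illustrative re-implementation, not the actual healthcheck_curl shell helper shipped in kolla images, and it loosely mirrors how Docker marks a container unhealthy after `retries` failed probes.

```python
# Illustrative stand-in for a healthcheck_curl-style HTTP probe, driven by
# the interval/retries/start_period/timeout values from the service dicts.
import time
import urllib.request

def healthcheck_curl(url: str, retries: int = 3, timeout: int = 30,
                     interval: int = 30, start_period: int = 5) -> bool:
    """Return True once the endpoint answers with an HTTP status < 400."""
    time.sleep(start_period)          # grace period after container start
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status < 400:
                    return True
        except OSError:
            pass                      # connection refused / timeout: retry
        time.sleep(interval)
    return False

if __name__ == "__main__":
    # Same target as the opensearch healthcheck on testbed-node-0.
    print(healthcheck_curl("http://192.168.16.10:9200"))
```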
orchestrator | changed: [testbed-node-0] 2025-05-13 23:44:43.659094 | orchestrator | 2025-05-13 23:44:43.659105 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-13 23:44:43.659116 | orchestrator | Tuesday 13 May 2025 23:41:53 +0000 (0:00:03.231) 0:00:15.170 *********** 2025-05-13 23:44:43.659127 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:44:43.659138 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:44:43.659148 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:44:43.659167 | orchestrator | 2025-05-13 23:44:43.659178 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-13 23:44:43.659189 | orchestrator | Tuesday 13 May 2025 23:41:55 +0000 (0:00:01.528) 0:00:16.699 *********** 2025-05-13 23:44:43.659206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.659225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.659237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 23:44:43.659249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.659267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.659294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 23:44:43.659306 | orchestrator | 2025-05-13 23:44:43.659317 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-13 23:44:43.659328 | orchestrator | Tuesday 13 May 2025 23:41:57 +0000 (0:00:01.919) 0:00:18.618 *********** 2025-05-13 23:44:43.659339 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:44:43.659350 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 23:44:43.659361 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:44:43.659372 | orchestrator | 2025-05-13 23:44:43.659383 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-13 23:44:43.659394 | orchestrator | Tuesday 13 May 2025 23:41:57 +0000 (0:00:00.283) 0:00:18.901 *********** 2025-05-13 23:44:43.659405 | orchestrator | 2025-05-13 23:44:43.659415 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-13 23:44:43.659426 | orchestrator | Tuesday 13 May 2025 23:41:57 +0000 (0:00:00.068) 0:00:18.970 *********** 2025-05-13 23:44:43.659437 | orchestrator | 2025-05-13 23:44:43.659447 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-13 23:44:43.659458 | orchestrator | Tuesday 13 May 2025 23:41:57 +0000 (0:00:00.067) 0:00:19.037 *********** 2025-05-13 23:44:43.659469 | orchestrator | 2025-05-13 23:44:43.659480 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-13 23:44:43.659491 | orchestrator | Tuesday 13 May 2025 23:41:58 +0000 (0:00:00.194) 0:00:19.232 *********** 2025-05-13 23:44:43.659502 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:44:43.659513 | orchestrator | 2025-05-13 23:44:43.659523 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-13 23:44:43.659534 | orchestrator | Tuesday 13 May 2025 23:41:58 +0000 (0:00:00.184) 0:00:19.417 *********** 2025-05-13 23:44:43.659545 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:44:43.659556 | orchestrator | 2025-05-13 23:44:43.659567 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-13 23:44:43.659577 | orchestrator | Tuesday 13 May 2025 23:41:58 +0000 (0:00:00.227) 0:00:19.644 *********** 2025-05-13 23:44:43.659588 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:44:43.659652 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:44:43.659675 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:44:43.659694 | orchestrator | 2025-05-13 23:44:43.659715 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-13 23:44:43.659735 | orchestrator | Tuesday 13 May 2025 23:43:14 +0000 (0:01:15.709) 0:01:35.354 *********** 2025-05-13 23:44:43.659755 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:44:43.659771 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:44:43.659782 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:44:43.659793 | orchestrator | 2025-05-13 23:44:43.659803 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-13 23:44:43.659814 | orchestrator | Tuesday 13 May 2025 23:44:32 +0000 (0:01:18.000) 0:02:53.354 *********** 2025-05-13 23:44:43.659825 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:44:43.659835 | orchestrator | 2025-05-13 23:44:43.659846 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-13 23:44:43.659856 | orchestrator | Tuesday 13 May 2025 23:44:32 +0000 (0:00:00.672) 0:02:54.027 *********** 2025-05-13 23:44:43.659867 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:44:43.659878 | orchestrator | 2025-05-13 23:44:43.659888 | orchestrator | TASK [opensearch : Check 
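The handlers skipped on testbed-node-0 above ("Disable shard allocation", "Perform a flush") belong to the rolling-restart path: they only fire when an already-running cluster is being restarted, so a fresh deployment skips them and goes straight to the container restarts. A hedged sketch of the REST calls such handlers typically issue, using the documented _cluster/settings and _flush endpoints (the exact request bodies and settings scope used by kolla-ansible may differ):

```python
# Sketch of the standard pre-restart sequence for OpenSearch/Elasticsearch:
# restrict shard allocation, then flush, so recovery after restart is fast.
import requests

OS = "http://192.168.16.10:9200"   # internal VIP/port from the log above

def disable_shard_allocation() -> None:
    # Only allow primary shard allocation during the restart window.
    requests.put(
        f"{OS}/_cluster/settings",
        json={"transient": {"cluster.routing.allocation.enable": "primaries"}},
        timeout=30,
    ).raise_for_status()

def flush_indices() -> None:
    # Persist in-memory segments before the nodes go down.
    requests.post(f"{OS}/_flush", timeout=30).raise_for_status()
```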
if a log retention policy exists] ********************* 2025-05-13 23:44:43.659898 | orchestrator | Tuesday 13 May 2025 23:44:35 +0000 (0:00:02.329) 0:02:56.356 *********** 2025-05-13 23:44:43.659909 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:44:43.659919 | orchestrator | 2025-05-13 23:44:43.659930 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-13 23:44:43.659940 | orchestrator | Tuesday 13 May 2025 23:44:37 +0000 (0:00:02.176) 0:02:58.532 *********** 2025-05-13 23:44:43.659957 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:44:43.659968 | orchestrator | 2025-05-13 23:44:43.659979 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-13 23:44:43.659989 | orchestrator | Tuesday 13 May 2025 23:44:39 +0000 (0:00:02.623) 0:03:01.156 *********** 2025-05-13 23:44:43.660000 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:44:43.660010 | orchestrator | 2025-05-13 23:44:43.660021 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:44:43.660032 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:44:43.660045 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 23:44:43.660056 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 23:44:43.660066 | orchestrator | 2025-05-13 23:44:43.660077 | orchestrator | 2025-05-13 23:44:43.660088 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:44:43.660106 | orchestrator | Tuesday 13 May 2025 23:44:42 +0000 (0:00:02.249) 0:03:03.405 *********** 2025-05-13 23:44:43.660117 | orchestrator | =============================================================================== 2025-05-13 23:44:43.660128 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 78.00s 2025-05-13 23:44:43.660138 | orchestrator | opensearch : Restart opensearch container ------------------------------ 75.71s 2025-05-13 23:44:43.660149 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.23s 2025-05-13 23:44:43.660160 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.84s 2025-05-13 23:44:43.660170 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.62s 2025-05-13 23:44:43.660181 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.39s 2025-05-13 23:44:43.660191 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.33s 2025-05-13 23:44:43.660210 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.25s 2025-05-13 23:44:43.660221 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.18s 2025-05-13 23:44:43.660231 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.92s 2025-05-13 23:44:43.660242 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.78s 2025-05-13 23:44:43.660252 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.53s 2025-05-13 23:44:43.660263 | orchestrator | service-cert-copy : opensearch | Copying over 
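The retention tasks above talk to the OpenSearch ISM (Index State Management) plugin: check whether a policy exists, create it if not, then attach it to indices that already exist. The sketch below uses the documented ISM endpoints; the policy id "retention", the flog-* index pattern, and the 14-day age are illustrative assumptions, not values read from this log.

```python
# Hedged sketch of the check/create/apply flow for an ISM retention policy.
import requests

OS = "http://192.168.16.10:9200"
POLICY_ID = "retention"                      # assumed policy name

policy = {
    "policy": {
        "description": "Delete old log indices",
        "default_state": "hot",
        "states": [
            {"name": "hot",
             "actions": [],
             "transitions": [{"state_name": "delete",
                              "conditions": {"min_index_age": "14d"}}]},
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
        "ism_template": [{"index_patterns": ["flog-*"]}],   # assumed pattern
    }
}

# Create the policy only if it is missing (mirrors "Check if ... exists").
if requests.get(f"{OS}/_plugins/_ism/policies/{POLICY_ID}",
                timeout=30).status_code == 404:
    requests.put(f"{OS}/_plugins/_ism/policies/{POLICY_ID}", json=policy,
                 timeout=30).raise_for_status()

# Attach it to pre-existing indices ("Apply retention policy ...").
requests.post(f"{OS}/_plugins/_ism/add/flog-*",
              json={"policy_id": POLICY_ID}, timeout=30).raise_for_status()
```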
backend internal TLS certificate --- 1.30s
2025-05-13 23:44:43.660274 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.93s
2025-05-13 23:44:43.660284 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.67s
2025-05-13 23:44:43.660294 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s
2025-05-13 23:44:43.660305 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2025-05-13 23:44:43.660315 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s
2025-05-13 23:44:43.660326 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s
2025-05-13 23:44:43.660336 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.33s
2025-05-13 23:44:43.660347 | orchestrator | 2025-05-13 23:44:43 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:44:46.715316 | orchestrator | 2025-05-13 23:44:46 | INFO  | Task e3a94d85-942a-4539-983d-3a4a13b619db is in state STARTED
2025-05-13 23:44:46.718140 | orchestrator | 2025-05-13 23:44:46 | INFO  | Task d4c4b1b4-cd92-4f0c-b208-9898dab4a4b8 is in state STARTED
2025-05-13 23:44:46.718538 | orchestrator | 2025-05-13 23:44:46 | INFO  | Wait 1 second(s) until the next check
[... identical polling records (both tasks in state STARTED, then a 1-second wait) repeat every ~3 seconds until 23:45:44 ...]
2025-05-13 23:45:47.769636 | orchestrator | 2025-05-13 23:45:47 | INFO  | Task e3a94d85-942a-4539-983d-3a4a13b619db is in state STARTED
2025-05-13 23:45:47.769886 | orchestrator | 2025-05-13 23:45:47 | INFO  | Task d4c4b1b4-cd92-4f0c-b208-9898dab4a4b8 is in state SUCCESS
2025-05-13 23:45:47.771781 | orchestrator |
2025-05-13 23:45:47.771831 | orchestrator |
2025-05-13 23:45:47.771843 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-05-13 23:45:47.771855 | orchestrator |
2025-05-13 23:45:47.771866 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-05-13 23:45:47.771877 | orchestrator | Tuesday 13 May 2025 23:43:39 +0000 (0:00:00.687) 0:00:00.687 ***********
2025-05-13 23:45:47.771888 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:45:47.771900 | orchestrator |
2025-05-13 23:45:47.771971 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-05-13 23:45:47.771986 | orchestrator | Tuesday 13 May 2025 23:43:40 +0000 (0:00:00.709) 0:00:01.397 ***********
2025-05-13 23:45:47.771997 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:45:47.772010 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:45:47.772021 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:45:47.772064 | orchestrator |
2025-05-13 23:45:47.772157 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-05-13 23:45:47.772170 | orchestrator | Tuesday 13 May 2025 23:43:41 +0000 (0:00:00.625) 0:00:02.022 ***********
2025-05-13 23:45:47.772181 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:45:47.772192 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:45:47.772202 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:45:47.772213 | orchestrator |
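The STARTED/SUCCESS records above come from the osism client polling the state of the two Ansible tasks it enqueued until both finish. A generic sketch of such a poll loop, with a hypothetical `get_state(task_id)` callable standing in for the real task-backend API (this is not the actual osism code):

```python
# Generic task-state polling loop matching the log pattern above.
import time
from typing import Callable, Iterable

def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> dict:
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in list(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                states[task_id] = state      # terminal state: stop polling it
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```
2025-05-13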
23:45:47.772223 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-05-13 23:45:47.772234 | orchestrator | Tuesday 13 May 2025 23:43:41 +0000 (0:00:00.332) 0:00:02.355 *********** 2025-05-13 23:45:47.772245 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:45:47.772255 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:45:47.772266 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:45:47.772276 | orchestrator | 2025-05-13 23:45:47.772287 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-05-13 23:45:47.772326 | orchestrator | Tuesday 13 May 2025 23:43:42 +0000 (0:00:00.806) 0:00:03.162 *********** 2025-05-13 23:45:47.772337 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:45:47.772347 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:45:47.772358 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:45:47.772402 | orchestrator | 2025-05-13 23:45:47.772414 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-05-13 23:45:47.772427 | orchestrator | Tuesday 13 May 2025 23:43:42 +0000 (0:00:00.301) 0:00:03.463 *********** 2025-05-13 23:45:47.772468 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:45:47.772481 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:45:47.772493 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:45:47.772505 | orchestrator | 2025-05-13 23:45:47.772517 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-05-13 23:45:47.772529 | orchestrator | Tuesday 13 May 2025 23:43:42 +0000 (0:00:00.312) 0:00:03.776 *********** 2025-05-13 23:45:47.772541 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:45:47.772552 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:45:47.772564 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:45:47.772713 | orchestrator | 2025-05-13 23:45:47.772727 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-05-13 23:45:47.772738 | orchestrator | Tuesday 13 May 2025 23:43:43 +0000 (0:00:00.304) 0:00:04.080 *********** 2025-05-13 23:45:47.772749 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.772762 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.772772 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.772783 | orchestrator | 2025-05-13 23:45:47.772794 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-05-13 23:45:47.772804 | orchestrator | Tuesday 13 May 2025 23:43:43 +0000 (0:00:00.504) 0:00:04.585 *********** 2025-05-13 23:45:47.772815 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:45:47.772825 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:45:47.772836 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:45:47.772847 | orchestrator | 2025-05-13 23:45:47.772857 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-13 23:45:47.772883 | orchestrator | Tuesday 13 May 2025 23:43:43 +0000 (0:00:00.295) 0:00:04.881 *********** 2025-05-13 23:45:47.772895 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-13 23:45:47.772905 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 23:45:47.772916 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 23:45:47.772927 | orchestrator | 
2025-05-13 23:45:47.772937 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-05-13 23:45:47.772948 | orchestrator | Tuesday 13 May 2025 23:43:44 +0000 (0:00:00.659) 0:00:05.541 *********** 2025-05-13 23:45:47.772958 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:45:47.772969 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:45:47.772979 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:45:47.772990 | orchestrator | 2025-05-13 23:45:47.773000 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-05-13 23:45:47.773011 | orchestrator | Tuesday 13 May 2025 23:43:44 +0000 (0:00:00.403) 0:00:05.944 *********** 2025-05-13 23:45:47.773022 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-13 23:45:47.773033 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 23:45:47.773044 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 23:45:47.773054 | orchestrator | 2025-05-13 23:45:47.773091 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-05-13 23:45:47.773103 | orchestrator | Tuesday 13 May 2025 23:43:47 +0000 (0:00:02.135) 0:00:08.079 *********** 2025-05-13 23:45:47.773113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-13 23:45:47.773136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-13 23:45:47.773146 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-13 23:45:47.773157 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.773168 | orchestrator | 2025-05-13 23:45:47.773178 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-05-13 23:45:47.773204 | orchestrator | Tuesday 13 May 2025 23:43:47 +0000 (0:00:00.382) 0:00:08.462 *********** 2025-05-13 23:45:47.773217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.773231 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.773243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.773253 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.773264 | orchestrator | 2025-05-13 23:45:47.773275 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-05-13 23:45:47.773286 | orchestrator | Tuesday 13 May 2025 23:43:48 +0000 (0:00:00.835) 0:00:09.298 *********** 2025-05-13 23:45:47.773298 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
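The ceph-facts tasks above first probe for a podman binary and fall back to docker, then build the exec command used against the ceph-mon-<hostname> containers. A minimal sketch of that fact logic; `shutil.which` stands in for the role's binary check, and the exact command layout is an assumption for illustration:

```python
# Sketch of the container_binary / container_exec_cmd facts.
import shutil

def detect_container_binary() -> str:
    # "Check if podman binary is present" -> set_fact container_binary
    return "podman" if shutil.which("podman") else "docker"

def container_exec_cmd(host: str, binary: str = "docker") -> list[str]:
    # Matches the ceph-mon-<hostname> container naming visible in the
    # docker ps filters below; ceph-ansible's exact construction may differ.
    return [binary, "exec", f"ceph-mon-{host}"]
```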
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.773312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.773324 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.773335 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.773346 | orchestrator | 2025-05-13 23:45:47.773356 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-05-13 23:45:47.773367 | orchestrator | Tuesday 13 May 2025 23:43:48 +0000 (0:00:00.167) 0:00:09.465 *********** 2025-05-13 23:45:47.773385 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0231d90da1c6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-13 23:43:45.668546', 'end': '2025-05-13 23:43:45.719623', 'delta': '0:00:00.051077', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0231d90da1c6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-13 23:45:47.773406 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '87c19fce26f1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-13 23:43:46.390300', 'end': '2025-05-13 23:43:46.431244', 'delta': '0:00:00.040944', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['87c19fce26f1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-13 23:45:47.773427 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '05e8ffeb3d71', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-13 23:43:46.926709', 'end': '2025-05-13 23:43:46.973644', 'delta': '0:00:00.046935', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 
'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['05e8ffeb3d71'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-13 23:45:47.773439 | orchestrator | 2025-05-13 23:45:47.773450 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-05-13 23:45:47.773461 | orchestrator | Tuesday 13 May 2025 23:43:48 +0000 (0:00:00.378) 0:00:09.844 *********** 2025-05-13 23:45:47.773472 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:45:47.773482 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:45:47.773493 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:45:47.773503 | orchestrator | 2025-05-13 23:45:47.773514 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-05-13 23:45:47.773524 | orchestrator | Tuesday 13 May 2025 23:43:49 +0000 (0:00:00.453) 0:00:10.298 *********** 2025-05-13 23:45:47.773535 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-13 23:45:47.773545 | orchestrator | 2025-05-13 23:45:47.773556 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-05-13 23:45:47.773566 | orchestrator | Tuesday 13 May 2025 23:43:51 +0000 (0:00:01.651) 0:00:11.950 *********** 2025-05-13 23:45:47.773576 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.773605 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.773616 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.773627 | orchestrator | 2025-05-13 23:45:47.773637 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-05-13 23:45:47.773648 | orchestrator | Tuesday 13 May 2025 23:43:51 +0000 (0:00:00.299) 0:00:12.250 *********** 2025-05-13 23:45:47.773658 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.773669 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.773679 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.773690 | orchestrator | 2025-05-13 23:45:47.773701 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-13 23:45:47.773711 | orchestrator | Tuesday 13 May 2025 23:43:51 +0000 (0:00:00.384) 0:00:12.634 *********** 2025-05-13 23:45:47.773722 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.773733 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.773743 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.773754 | orchestrator | 2025-05-13 23:45:47.773764 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-05-13 23:45:47.773775 | orchestrator | Tuesday 13 May 2025 23:43:52 +0000 (0:00:00.488) 0:00:13.122 *********** 2025-05-13 23:45:47.773785 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:45:47.773796 | orchestrator | 2025-05-13 23:45:47.773806 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-05-13 23:45:47.773823 | orchestrator | Tuesday 13 May 2025 23:43:52 +0000 (0:00:00.133) 0:00:13.256 *********** 2025-05-13 23:45:47.773834 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.773845 | orchestrator | 2025-05-13 23:45:47.773856 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-13 23:45:47.773866 | orchestrator | Tuesday 13 May 2025 23:43:52 +0000 (0:00:00.224) 0:00:13.481 
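"Find a running mon container" loops over the monitor hosts and records which ones answer `docker ps -q --filter name=ceph-mon-<host>`; the container IDs in the results above (0231d90da1c6, 87c19fce26f1, 05e8ffeb3d71) are exactly that stdout. The fsid is then read from the running cluster rather than regenerated, which is why "Generate cluster fsid" is skipped. A local re-implementation for illustration only (the role actually runs the command on each remote host via delegation):

```python
# Find the first monitor host with a running ceph-mon container.
import subprocess

def find_running_mon(mon_hosts):
    for host in mon_hosts:
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
            capture_output=True, text=True, check=False,
        )
        if result.stdout.strip():        # non-empty output -> mon is running
            return host, result.stdout.strip()
    return None, None

host, cid = find_running_mon(["testbed-node-0", "testbed-node-1",
                              "testbed-node-2"])
print(host, cid)   # e.g. testbed-node-0 0231d90da1c6
```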
*********** 2025-05-13 23:45:47.773877 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.773887 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.773903 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.773914 | orchestrator | 2025-05-13 23:45:47.773924 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-05-13 23:45:47.773935 | orchestrator | Tuesday 13 May 2025 23:43:52 +0000 (0:00:00.297) 0:00:13.778 *********** 2025-05-13 23:45:47.773945 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.773956 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.773967 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.773977 | orchestrator | 2025-05-13 23:45:47.773988 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-05-13 23:45:47.773998 | orchestrator | Tuesday 13 May 2025 23:43:53 +0000 (0:00:00.301) 0:00:14.080 *********** 2025-05-13 23:45:47.774009 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.774068 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.774082 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.774093 | orchestrator | 2025-05-13 23:45:47.774104 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-05-13 23:45:47.774115 | orchestrator | Tuesday 13 May 2025 23:43:53 +0000 (0:00:00.512) 0:00:14.592 *********** 2025-05-13 23:45:47.774126 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.774136 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.774147 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.774157 | orchestrator | 2025-05-13 23:45:47.774168 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-05-13 23:45:47.774179 | orchestrator | Tuesday 13 May 2025 23:43:53 +0000 (0:00:00.352) 0:00:14.944 *********** 2025-05-13 23:45:47.774189 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.774200 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.774211 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.774222 | orchestrator | 2025-05-13 23:45:47.774232 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-05-13 23:45:47.774243 | orchestrator | Tuesday 13 May 2025 23:43:54 +0000 (0:00:00.303) 0:00:15.248 *********** 2025-05-13 23:45:47.774254 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.774265 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.774276 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.774286 | orchestrator | 2025-05-13 23:45:47.774297 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-13 23:45:47.774316 | orchestrator | Tuesday 13 May 2025 23:43:54 +0000 (0:00:00.317) 0:00:15.566 *********** 2025-05-13 23:45:47.774327 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.774338 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.774348 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.774359 | orchestrator | 2025-05-13 23:45:47.774370 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-05-13 23:45:47.774380 | orchestrator | Tuesday 13 May 2025 23:43:55 +0000 (0:00:00.515) 0:00:16.081 *********** 2025-05-13 23:45:47.774392 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cf553414--fd5b--54a4--812a--8e7012220720-osd--block--cf553414--fd5b--54a4--812a--8e7012220720', 'dm-uuid-LVM-1pX9WnHfeMT9nTIQouj7wiTl4rr7tArytNXgfJr31zE1gxodC69TGdXblsuHSIqw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9ea6307c--c51b--54ed--aeb4--48fe7d66605c-osd--block--9ea6307c--c51b--54ed--aeb4--48fe7d66605c', 'dm-uuid-LVM-RSGUaRafehkiir5SfOds7jROuPzmjzWVxLyJIWSEcWCDPJyfdhiOuZzq4LK6qQoo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 
23:45:47.774494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8f56c737--ae06--5042--be62--d4d7430a3913-osd--block--8f56c737--ae06--5042--be62--d4d7430a3913', 'dm-uuid-LVM-X31KRVqgJz32iEekGhM2Qq1k078Hw2qZdb03amgeAWfUc6Oza19mbyk8twnSEAIr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.774605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3-osd--block--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3', 'dm-uuid-LVM-4jWP9izaLLqkoflDNqUAXrWS6p6173C51LsIYNJBAT5kTNs3a3kKM70MQvfSKZft'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cf553414--fd5b--54a4--812a--8e7012220720-osd--block--cf553414--fd5b--54a4--812a--8e7012220720'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GFR53h-bjpN-LvAK-K4J7-1dHu-eaMe-SvdOns', 'scsi-0QEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45', 'scsi-SQEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.774638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9ea6307c--c51b--54ed--aeb4--48fe7d66605c-osd--block--9ea6307c--c51b--54ed--aeb4--48fe7d66605c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YRX48y-tAU9-6MkF-cnzG-Gs1X-DKt5-tiM1Jb', 'scsi-0QEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000', 'scsi-SQEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.774662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438', 'scsi-SQEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.774691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.774722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.774826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8f56c737--ae06--5042--be62--d4d7430a3913-osd--block--8f56c737--ae06--5042--be62--d4d7430a3913'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kKUXcV-NRc8-Te46-jGWo-Ip4f-DlWw-6i6xRr', 'scsi-0QEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05', 'scsi-SQEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.774838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3-osd--block--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iALSnE-fged-l0II-iQ2Q-DplQ-iluv-DkubK5', 'scsi-0QEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648', 'scsi-SQEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.774850 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.774861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677', 'scsi-SQEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.774878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.774915 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.774927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53cfcf66--6862--5829--a71b--dc902cfbd9df-osd--block--53cfcf66--6862--5829--a71b--dc902cfbd9df', 'dm-uuid-LVM-u04ANOtmOdGz1Vzl9h6jqIKzRS7efN642z7ZMI1f66JIrWUs8jF7PnqjXXBvMoRy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d153f4c4--5597--54b4--b460--41e490b92c19-osd--block--d153f4c4--5597--54b4--b460--41e490b92c19', 'dm-uuid-LVM-PYU5eiYmArZZx9l0IRv7NkCQeLmEUpEudrGIxN3Awr1GUIw1Dw6FjNk2029z1Y9Y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.774999 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.775010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.775026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.775036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.775047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 23:45:47.775068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part1', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part14', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part15', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part16', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.775087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--53cfcf66--6862--5829--a71b--dc902cfbd9df-osd--block--53cfcf66--6862--5829--a71b--dc902cfbd9df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rHhLav-nIRm-kwul-12gR-Y0i1-rO5X-mga0H8', 'scsi-0QEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8', 'scsi-SQEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.775104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d153f4c4--5597--54b4--b460--41e490b92c19-osd--block--d153f4c4--5597--54b4--b460--41e490b92c19'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zRfxmE-geHX-KaCf-Tjbv-h6oW-e94U-M8FcSh', 'scsi-0QEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e', 'scsi-SQEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.775116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3', 'scsi-SQEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.775140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 23:45:47.775152 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.775163 | orchestrator | 2025-05-13 23:45:47.775174 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-05-13 23:45:47.775185 | orchestrator | Tuesday 13 May 2025 23:43:55 +0000 (0:00:00.642) 0:00:16.724 *********** 2025-05-13 23:45:47.775196 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cf553414--fd5b--54a4--812a--8e7012220720-osd--block--cf553414--fd5b--54a4--812a--8e7012220720', 'dm-uuid-LVM-1pX9WnHfeMT9nTIQouj7wiTl4rr7tArytNXgfJr31zE1gxodC69TGdXblsuHSIqw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9ea6307c--c51b--54ed--aeb4--48fe7d66605c-osd--block--9ea6307c--c51b--54ed--aeb4--48fe7d66605c', 'dm-uuid-LVM-RSGUaRafehkiir5SfOds7jROuPzmjzWVxLyJIWSEcWCDPJyfdhiOuZzq4LK6qQoo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775237 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775249 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775298 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8f56c737--ae06--5042--be62--d4d7430a3913-osd--block--8f56c737--ae06--5042--be62--d4d7430a3913', 'dm-uuid-LVM-X31KRVqgJz32iEekGhM2Qq1k078Hw2qZdb03amgeAWfUc6Oza19mbyk8twnSEAIr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775309 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775326 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3-osd--block--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3', 'dm-uuid-LVM-4jWP9izaLLqkoflDNqUAXrWS6p6173C51LsIYNJBAT5kTNs3a3kKM70MQvfSKZft'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775344 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775363 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775374 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775386 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775404 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_c6453a8e-6632-42ad-a179-435c946212ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775429 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--cf553414--fd5b--54a4--812a--8e7012220720-osd--block--cf553414--fd5b--54a4--812a--8e7012220720'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GFR53h-bjpN-LvAK-K4J7-1dHu-eaMe-SvdOns', 'scsi-0QEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45', 'scsi-SQEMU_QEMU_HARDDISK_2123f305-4e6b-4736-99ab-18aaa07aaf45'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775453 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775465 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9ea6307c--c51b--54ed--aeb4--48fe7d66605c-osd--block--9ea6307c--c51b--54ed--aeb4--48fe7d66605c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YRX48y-tAU9-6MkF-cnzG-Gs1X-DKt5-tiM1Jb', 'scsi-0QEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000', 'scsi-SQEMU_QEMU_HARDDISK_46243ec1-9f30-4dd7-b280-49f134625000'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775503 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775521 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775532 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438', 'scsi-SQEMU_QEMU_HARDDISK_213ab59a-cb73-4407-9705-0b2ca8256438'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775544 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775600 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.775622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_318ab0b7-de56-4f87-ab50-209f607532c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8f56c737--ae06--5042--be62--d4d7430a3913-osd--block--8f56c737--ae06--5042--be62--d4d7430a3913'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kKUXcV-NRc8-Te46-jGWo-Ip4f-DlWw-6i6xRr', 'scsi-0QEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05', 'scsi-SQEMU_QEMU_HARDDISK_c475673a-0096-49dd-a2ab-dba7e6677c05'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775652 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53cfcf66--6862--5829--a71b--dc902cfbd9df-osd--block--53cfcf66--6862--5829--a71b--dc902cfbd9df', 'dm-uuid-LVM-u04ANOtmOdGz1Vzl9h6jqIKzRS7efN642z7ZMI1f66JIrWUs8jF7PnqjXXBvMoRy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775671 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3-osd--block--b9ab4848--02bd--5b2a--a6cc--ded55503b6b3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iALSnE-fged-l0II-iQ2Q-DplQ-iluv-DkubK5', 'scsi-0QEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648', 'scsi-SQEMU_QEMU_HARDDISK_a5357627-6c2a-405a-984b-26b28125b648'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775694 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d153f4c4--5597--54b4--b460--41e490b92c19-osd--block--d153f4c4--5597--54b4--b460--41e490b92c19', 'dm-uuid-LVM-PYU5eiYmArZZx9l0IRv7NkCQeLmEUpEudrGIxN3Awr1GUIw1Dw6FjNk2029z1Y9Y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677', 'scsi-SQEMU_QEMU_HARDDISK_0156a383-42b8-4f65-bebb-758e8d549677'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775717 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775728 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 
'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775750 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775762 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.775773 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775792 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775814 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775826 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775844 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775871 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part1', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part14', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part15', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part16', 'scsi-SQEMU_QEMU_HARDDISK_b255196f-0cab-4746-bd7d-248a31197f78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775883 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--53cfcf66--6862--5829--a71b--dc902cfbd9df-osd--block--53cfcf66--6862--5829--a71b--dc902cfbd9df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rHhLav-nIRm-kwul-12gR-Y0i1-rO5X-mga0H8', 'scsi-0QEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8', 'scsi-SQEMU_QEMU_HARDDISK_61dae38b-1d40-412d-9df6-8d9734e6ced8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d153f4c4--5597--54b4--b460--41e490b92c19-osd--block--d153f4c4--5597--54b4--b460--41e490b92c19'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zRfxmE-geHX-KaCf-Tjbv-h6oW-e94U-M8FcSh', 'scsi-0QEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e', 'scsi-SQEMU_QEMU_HARDDISK_0aeac9b9-4df2-4d9e-975e-68588115061e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 23:45:47.775921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3', 'scsi-SQEMU_QEMU_HARDDISK_55ed4948-9fe5-49ab-9e57-6f6f508ce8e3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:45:47.775940 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-22-38-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 23:45:47.775951 | orchestrator | skipping: [testbed-node-5]
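Every block device reported by Ansible's hardware facts is offered to this task, and every item is skipped because the loop condition shown in the log, 'osd_auto_discovery | default(False) | bool', evaluates false: this testbed lists its OSD devices explicitly instead of auto-discovering them. A minimal sketch of the auto-discovery pattern, assuming typical variable names rather than the role's verbatim source:

    # Build a devices list from ansible_facts.devices when auto-discovery is
    # enabled. Skips removable media, dm-/loop/sr devices, and disks that
    # already carry partitions or LVM holders.
    - name: Generate device list when osd_auto_discovery
      ansible.builtin.set_fact:
        devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
      with_dict: "{{ ansible_facts['devices'] }}"
      when:
        - osd_auto_discovery | default(False) | bool
        - item.value.removable == '0'
        - item.value.partitions | length == 0
        - item.value.holders | length == 0
        - item.key is not match('^(dm-|loop|sr)')

Because the first condition is false here, each dm-*, loop*, sd* and sr0 item is evaluated once and skipped, which is why the full facts dictionary for every device on every storage node is echoed above.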
23:43:58 +0000 (0:00:00.291) 0:00:19.472 *********** 2025-05-13 23:45:47.776260 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.776271 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.776281 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.776292 | orchestrator | 2025-05-13 23:45:47.776303 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-13 23:45:47.776313 | orchestrator | Tuesday 13 May 2025 23:43:58 +0000 (0:00:00.423) 0:00:19.896 *********** 2025-05-13 23:45:47.776324 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.776335 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.776345 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.776356 | orchestrator | 2025-05-13 23:45:47.776367 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-05-13 23:45:47.776378 | orchestrator | Tuesday 13 May 2025 23:43:59 +0000 (0:00:00.532) 0:00:20.429 *********** 2025-05-13 23:45:47.776388 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-13 23:45:47.776399 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-13 23:45:47.776410 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-13 23:45:47.776421 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-13 23:45:47.776431 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-13 23:45:47.776442 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-13 23:45:47.776452 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-13 23:45:47.776463 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-13 23:45:47.776478 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-13 23:45:47.776489 | orchestrator | 2025-05-13 23:45:47.776500 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-05-13 23:45:47.776510 | orchestrator | Tuesday 13 May 2025 23:44:00 +0000 (0:00:00.869) 0:00:21.299 *********** 2025-05-13 23:45:47.776521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-13 23:45:47.776531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-13 23:45:47.776542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-13 23:45:47.776553 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.776564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-13 23:45:47.776574 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-13 23:45:47.776613 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-13 23:45:47.776624 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.776635 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-13 23:45:47.776646 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-13 23:45:47.776656 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-13 23:45:47.776667 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.776678 | orchestrator | 2025-05-13 23:45:47.776688 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-05-13 23:45:47.776699 | orchestrator | Tuesday 13 May 2025 23:44:00 +0000 (0:00:00.357) 0:00:21.656 *********** 2025-05-13 23:45:47.776711 | 
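
The _monitor_addresses tasks directly above run once per IP family, and only the IPv4 branch does any work here: each of testbed-node-3/4/5 records one address per monitor (testbed-node-0 through testbed-node-2), nine "ok" items in total, while the IPv6 variant skips the same nine combinations. A sketch of the accumulation pattern, with the address variable and condition names assumed rather than copied from ceph-ansible:

    # Each host builds its own _monitor_addresses list, one entry per
    # member of the monitor group.
    - name: Set_fact _monitor_addresses - ipv4
      ansible.builtin.set_fact:
        _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['monitor_address']}] }}"
      loop: "{{ groups[mon_group_name] }}"
      when: ip_version == 'ipv4'
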
orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:45:47.776722 | orchestrator | 2025-05-13 23:45:47.776733 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-13 23:45:47.776745 | orchestrator | Tuesday 13 May 2025 23:44:01 +0000 (0:00:00.715) 0:00:22.371 *********** 2025-05-13 23:45:47.776755 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.776766 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.776784 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.776795 | orchestrator | 2025-05-13 23:45:47.776812 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-13 23:45:47.776823 | orchestrator | Tuesday 13 May 2025 23:44:01 +0000 (0:00:00.356) 0:00:22.728 *********** 2025-05-13 23:45:47.776834 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.776845 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.776855 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.776866 | orchestrator | 2025-05-13 23:45:47.776877 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-13 23:45:47.776887 | orchestrator | Tuesday 13 May 2025 23:44:02 +0000 (0:00:00.320) 0:00:23.048 *********** 2025-05-13 23:45:47.776898 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.776908 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.776919 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:45:47.776930 | orchestrator | 2025-05-13 23:45:47.776940 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-13 23:45:47.776951 | orchestrator | Tuesday 13 May 2025 23:44:02 +0000 (0:00:00.299) 0:00:23.348 *********** 2025-05-13 23:45:47.776962 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:45:47.776973 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:45:47.776983 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:45:47.776994 | orchestrator | 2025-05-13 23:45:47.777004 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-13 23:45:47.777015 | orchestrator | Tuesday 13 May 2025 23:44:03 +0000 (0:00:00.672) 0:00:24.020 *********** 2025-05-13 23:45:47.777026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 23:45:47.777036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 23:45:47.777047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 23:45:47.777058 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.777068 | orchestrator | 2025-05-13 23:45:47.777079 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-13 23:45:47.777090 | orchestrator | Tuesday 13 May 2025 23:44:03 +0000 (0:00:00.426) 0:00:24.446 *********** 2025-05-13 23:45:47.777100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 23:45:47.777110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 23:45:47.777121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 23:45:47.777131 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.777142 | orchestrator | 2025-05-13 23:45:47.777153 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-13 23:45:47.777163 | orchestrator | Tuesday 13 May 2025 23:44:03 +0000 (0:00:00.390) 0:00:24.837 *********** 2025-05-13 23:45:47.777174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 23:45:47.777184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 23:45:47.777195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 23:45:47.777205 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.777216 | orchestrator | 2025-05-13 23:45:47.777227 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-13 23:45:47.777237 | orchestrator | Tuesday 13 May 2025 23:44:04 +0000 (0:00:00.365) 0:00:25.203 *********** 2025-05-13 23:45:47.777248 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:45:47.777258 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:45:47.777269 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:45:47.777280 | orchestrator | 2025-05-13 23:45:47.777290 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-13 23:45:47.777301 | orchestrator | Tuesday 13 May 2025 23:44:04 +0000 (0:00:00.327) 0:00:25.530 *********** 2025-05-13 23:45:47.777311 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-13 23:45:47.777322 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-13 23:45:47.777333 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-13 23:45:47.777350 | orchestrator | 2025-05-13 23:45:47.777371 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-05-13 23:45:47.777382 | orchestrator | Tuesday 13 May 2025 23:44:05 +0000 (0:00:00.500) 0:00:26.031 *********** 2025-05-13 23:45:47.777393 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-13 23:45:47.777404 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 23:45:47.777414 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 23:45:47.777425 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-13 23:45:47.777435 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-13 23:45:47.777446 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-13 23:45:47.777457 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-13 23:45:47.777467 | orchestrator | 2025-05-13 23:45:47.777478 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-05-13 23:45:47.777488 | orchestrator | Tuesday 13 May 2025 23:44:06 +0000 (0:00:01.067) 0:00:27.099 *********** 2025-05-13 23:45:47.777499 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-13 23:45:47.777509 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 23:45:47.777520 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 23:45:47.777530 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-13 23:45:47.777541 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-13 
23:45:47.777551 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-13 23:45:47.777562 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-13 23:45:47.777572 | orchestrator | 2025-05-13 23:45:47.777610 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-13 23:45:47.777622 | orchestrator | Tuesday 13 May 2025 23:44:08 +0000 (0:00:01.948) 0:00:29.047 *********** 2025-05-13 23:45:47.777633 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:45:47.777644 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:45:47.777654 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-13 23:45:47.777665 | orchestrator | 2025-05-13 23:45:47.777676 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-13 23:45:47.777687 | orchestrator | Tuesday 13 May 2025 23:44:08 +0000 (0:00:00.418) 0:00:29.465 *********** 2025-05-13 23:45:47.777698 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 23:45:47.777710 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 23:45:47.777721 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 23:45:47.777733 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 23:45:47.777751 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 23:45:47.777762 | orchestrator | 2025-05-13 23:45:47.777773 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-13 23:45:47.777784 | orchestrator | Tuesday 13 May 2025 23:44:52 +0000 (0:00:43.784) 0:01:13.250 *********** 2025-05-13 23:45:47.777794 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777805 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777815 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777826 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777836 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777852 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777862 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-13 23:45:47.777873 | orchestrator | 2025-05-13 23:45:47.777884 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-13 23:45:47.777895 | orchestrator | Tuesday 13 May 2025 23:45:15 +0000 (0:00:23.261) 0:01:36.511 *********** 2025-05-13 23:45:47.777905 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777916 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777926 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777937 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777948 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777958 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.777969 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 23:45:47.777980 | orchestrator | 2025-05-13 23:45:47.777990 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-13 23:45:47.778001 | orchestrator | Tuesday 13 May 2025 23:45:27 +0000 (0:00:11.947) 0:01:48.459 *********** 2025-05-13 23:45:47.778011 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.778053 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 23:45:47.778064 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 23:45:47.778075 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.778085 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 23:45:47.778096 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 23:45:47.778114 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.778124 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 23:45:47.778135 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 23:45:47.778146 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.778156 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 23:45:47.778167 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 23:45:47.778177 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.778195 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 23:45:47.778206 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 23:45:47.778217 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 23:45:47.778228 | orchestrator | 
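
A quirk worth noting in the key-handling tasks above: the delegation labels "{{ groups[mon_group_name][0] }}" and "{{ item.1 }}" are printed unrendered. Ansible falls back to the raw loop label when it cannot template it for display; the tasks themselves still delegated correctly, as the resolved "testbed-node-N(192.168.16.x)" entries show. The distribution step plausibly has the shape of the nested loop below, where each keyring generated on the first monitor is copied to all three monitors (file layout and variable names are assumptions, not the ceph-ansible source); six keys times three monitors matches the eighteen "changed" items in this task's output:

    # Push every generated OpenStack keyring to every monitor node.
    - name: copy ceph key(s) if needed
      ansible.builtin.copy:
        dest: "/etc/ceph/{{ cluster }}.{{ item.0.name }}.keyring"
        content: "{{ item.0.keyring }}"
        mode: "0600"
      delegate_to: "{{ item.1 }}"
      loop: "{{ openstack_keys | product(groups[mon_group_name]) | list }}"
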
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 23:45:47.778238 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 23:45:47.778249 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-13 23:45:47.778260 | orchestrator | 2025-05-13 23:45:47.778270 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:45:47.778281 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-05-13 23:45:47.778292 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-13 23:45:47.778303 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-13 23:45:47.778315 | orchestrator | 2025-05-13 23:45:47.778326 | orchestrator | 2025-05-13 23:45:47.778336 | orchestrator | 2025-05-13 23:45:47.778347 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:45:47.778357 | orchestrator | Tuesday 13 May 2025 23:45:44 +0000 (0:00:16.790) 0:02:05.249 *********** 2025-05-13 23:45:47.778368 | orchestrator | =============================================================================== 2025-05-13 23:45:47.778379 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.78s 2025-05-13 23:45:47.778390 | orchestrator | generate keys ---------------------------------------------------------- 23.26s 2025-05-13 23:45:47.778401 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.79s 2025-05-13 23:45:47.778411 | orchestrator | get keys from monitors ------------------------------------------------- 11.95s 2025-05-13 23:45:47.778422 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.14s 2025-05-13 23:45:47.778432 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.95s 2025-05-13 23:45:47.778443 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.65s 2025-05-13 23:45:47.778454 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.07s 2025-05-13 23:45:47.778464 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2025-05-13 23:45:47.778480 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.84s 2025-05-13 23:45:47.778491 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s 2025-05-13 23:45:47.778502 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2025-05-13 23:45:47.778512 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.72s 2025-05-13 23:45:47.778523 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.71s 2025-05-13 23:45:47.778534 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.67s 2025-05-13 23:45:47.778544 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.66s 2025-05-13 23:45:47.778555 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.64s 2025-05-13 23:45:47.778565 | orchestrator | ceph-facts : Read osd pool default crush rule 
--------------------------- 0.64s 2025-05-13 23:45:47.778576 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.63s 2025-05-13 23:45:47.778638 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s 2025-05-13 23:45:47.778649 | orchestrator | 2025-05-13 23:45:47 | INFO  | Task b6b060d5-0ecd-4336-b2b2-f1d68f5f38a6 is in state STARTED 2025-05-13 23:45:47.778668 | orchestrator | 2025-05-13 23:45:47 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:45:50.816670 | orchestrator | 2025-05-13 23:45:50 | INFO  | Task e3a94d85-942a-4539-983d-3a4a13b619db is in state STARTED 2025-05-13 23:45:50.817289 | orchestrator | 2025-05-13 23:45:50 | INFO  | Task b6b060d5-0ecd-4336-b2b2-f1d68f5f38a6 is in state STARTED 2025-05-13 23:45:50.817320 | orchestrator | 2025-05-13 23:45:50 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:45:53.882412 | orchestrator | 2025-05-13 23:45:53 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:45:53.887355 | orchestrator | 2025-05-13 23:45:53.887563 | orchestrator | 2025-05-13 23:45:53.887671 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-13 23:45:53.887697 | orchestrator | 2025-05-13 23:45:53.887718 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-13 23:45:53.887740 | orchestrator | Tuesday 13 May 2025 23:41:39 +0000 (0:00:00.106) 0:00:00.106 *********** 2025-05-13 23:45:53.887761 | orchestrator | ok: [localhost] => { 2025-05-13 23:45:53.887783 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-13 23:45:53.887805 | orchestrator | } 2025-05-13 23:45:53.887828 | orchestrator | 2025-05-13 23:45:53.887849 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-13 23:45:53.887870 | orchestrator | Tuesday 13 May 2025 23:41:39 +0000 (0:00:00.047) 0:00:00.153 *********** 2025-05-13 23:45:53.887890 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-13 23:45:53.887914 | orchestrator | ...ignoring 2025-05-13 23:45:53.887935 | orchestrator | 2025-05-13 23:45:53.887956 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-13 23:45:53.887977 | orchestrator | Tuesday 13 May 2025 23:41:41 +0000 (0:00:02.898) 0:00:03.052 *********** 2025-05-13 23:45:53.887997 | orchestrator | skipping: [localhost] 2025-05-13 23:45:53.888017 | orchestrator | 2025-05-13 23:45:53.888038 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-13 23:45:53.888110 | orchestrator | Tuesday 13 May 2025 23:41:42 +0000 (0:00:00.054) 0:00:03.107 *********** 2025-05-13 23:45:53.888129 | orchestrator | ok: [localhost] 2025-05-13 23:45:53.888149 | orchestrator | 2025-05-13 23:45:53.888168 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:45:53.888187 | orchestrator | 2025-05-13 23:45:53.888206 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:45:53.888224 | orchestrator | Tuesday 13 May 2025 23:41:42 +0000 (0:00:00.176) 0:00:03.284 *********** 2025-05-13 23:45:53.888242 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.888260 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:45:53.888278 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:45:53.888297 | orchestrator | 2025-05-13 23:45:53.888316 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:45:53.888335 | orchestrator | Tuesday 13 May 2025 23:41:42 +0000 (0:00:00.354) 0:00:03.638 *********** 2025-05-13 23:45:53.888355 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-13 23:45:53.888374 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-13 23:45:53.888391 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-13 23:45:53.888410 | orchestrator | 2025-05-13 23:45:53.888430 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-13 23:45:53.888448 | orchestrator | 2025-05-13 23:45:53.888467 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-13 23:45:53.888486 | orchestrator | Tuesday 13 May 2025 23:41:43 +0000 (0:00:00.590) 0:00:04.229 *********** 2025-05-13 23:45:53.888505 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 23:45:53.888556 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-13 23:45:53.888613 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-13 23:45:53.888634 | orchestrator | 2025-05-13 23:45:53.888652 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-13 23:45:53.888672 | orchestrator | Tuesday 13 May 2025 23:41:43 +0000 (0:00:00.555) 0:00:04.784 *********** 2025-05-13 23:45:53.888690 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:45:53.888710 | orchestrator | 2025-05-13 23:45:53.888747 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-13 23:45:53.888765 | orchestrator | Tuesday 13 May 2025 23:41:44 +0000 (0:00:00.629) 0:00:05.414 *********** 2025-05-13 
23:45:53.888816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 23:45:53.888840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 23:45:53.888871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 23:45:53.888885 | orchestrator | 2025-05-13 23:45:53.888907 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-13 23:45:53.888919 | orchestrator | Tuesday 13 May 2025 23:41:47 +0000 (0:00:03.286) 0:00:08.700 *********** 2025-05-13 23:45:53.888930 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.888942 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.888952 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.888963 | orchestrator | 2025-05-13 23:45:53.888974 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-13 23:45:53.888985 | orchestrator | Tuesday 13 May 2025 23:41:48 +0000 (0:00:00.842) 0:00:09.543 *********** 2025-05-13 23:45:53.888995 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.889006 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.889016 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.889027 | orchestrator | 2025-05-13 23:45:53.889037 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-13 23:45:53.889048 | orchestrator | Tuesday 13 May 2025 23:41:49 +0000 (0:00:01.443) 0:00:10.986 *********** 2025-05-13 23:45:53.889059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 23:45:53.889096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 23:45:53.889110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 23:45:53.889129 | orchestrator | 2025-05-13 23:45:53.889140 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-13 23:45:53.889151 | orchestrator | Tuesday 13 May 2025 23:41:53 +0000 (0:00:03.757) 0:00:14.744 *********** 2025-05-13 23:45:53.889162 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.889173 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.889184 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.889194 | orchestrator | 2025-05-13 23:45:53.889205 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-13 23:45:53.889216 | orchestrator | Tuesday 13 May 2025 23:41:54 +0000 (0:00:01.098) 0:00:15.842 *********** 2025-05-13 23:45:53.889226 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:45:53.889237 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.889248 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:45:53.889335 | orchestrator | 2025-05-13 23:45:53.889353 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-13 23:45:53.889365 | orchestrator | Tuesday 13 May 2025 23:41:58 +0000 (0:00:03.744) 0:00:19.586 *********** 2025-05-13 23:45:53.889375 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:45:53.889386 | orchestrator | 2025-05-13 23:45:53.889396 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-13 23:45:53.889407 | orchestrator | Tuesday 13 May 2025 23:41:59 +0000 (0:00:00.577) 0:00:20.164 *********** 2025-05-13 23:45:53.889429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:45:53.889450 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.889468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:45:53.889480 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.889499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:45:53.889512 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.889523 | orchestrator | 2025-05-13 23:45:53.889533 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-05-13 23:45:53.889551 | orchestrator | Tuesday 13 May 2025 23:42:03 +0000 (0:00:04.401) 0:00:24.565 *********** 2025-05-13 23:45:53.889563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:45:53.889574 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.889643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:45:53.889665 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.889684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:45:53.889709 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.889720 | orchestrator | 2025-05-13 23:45:53.889731 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-13 23:45:53.889742 | orchestrator | Tuesday 13 May 2025 23:42:06 +0000 (0:00:03.363) 0:00:27.929 *********** 2025-05-13 23:45:53.889759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:45:53.889772 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.889793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:45:53.889819 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.889837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 23:45:53.889849 | 
orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.889860 | orchestrator | 2025-05-13 23:45:53.889871 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-13 23:45:53.889881 | orchestrator | Tuesday 13 May 2025 23:42:11 +0000 (0:00:04.561) 0:00:32.491 *********** 2025-05-13 23:45:53.889900 | orchestrator | 2025-05-13 23:45:53 | INFO  | Task e3a94d85-942a-4539-983d-3a4a13b619db is in state SUCCESS 2025-05-13 23:45:53.889923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 23:45:53.889943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 23:45:53.889965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 23:45:53.889986 | orchestrator | 2025-05-13 23:45:53.889997 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-13 23:45:53.890008 | orchestrator | Tuesday 13 May 2025 23:42:15 +0000 (0:00:03.999) 0:00:36.491 *********** 2025-05-13 23:45:53.890075 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.890090 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:45:53.890101 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:45:53.890112 | orchestrator | 2025-05-13 23:45:53.890123 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-13 23:45:53.890133 | orchestrator | Tuesday 13 May 2025 23:42:16 +0000 (0:00:00.947) 0:00:37.438 *********** 2025-05-13 23:45:53.890144 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.890155 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:45:53.890166 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:45:53.890176 | orchestrator | 2025-05-13 23:45:53.890187 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2025-05-13 23:45:53.890198 | orchestrator | Tuesday 13 May 2025 23:42:16 +0000 (0:00:00.284) 0:00:37.723 *********** 2025-05-13 23:45:53.890209 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.890219 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:45:53.890230 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:45:53.890241 | orchestrator | 2025-05-13 23:45:53.890252 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-13 23:45:53.890263 | orchestrator | Tuesday 13 May 2025 23:42:16 +0000 (0:00:00.314) 0:00:38.037 *********** 2025-05-13 23:45:53.890275 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-13 23:45:53.890287 | orchestrator | ...ignoring 2025-05-13 23:45:53.890298 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-13 23:45:53.890308 | orchestrator | ...ignoring 2025-05-13 23:45:53.890325 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-13 23:45:53.890336 | orchestrator | ...ignoring 2025-05-13 23:45:53.890347 | orchestrator | 2025-05-13 23:45:53.890358 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-13 23:45:53.890369 | orchestrator | Tuesday 13 May 2025 23:42:27 +0000 (0:00:10.844) 0:00:48.881 *********** 2025-05-13 23:45:53.890380 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.890391 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:45:53.890402 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:45:53.890421 | orchestrator | 2025-05-13 23:45:53.890431 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-13 23:45:53.890442 | orchestrator | Tuesday 13 May 2025 23:42:28 +0000 (0:00:00.678) 0:00:49.560 *********** 2025-05-13 23:45:53.890453 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.890464 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.890475 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.890485 | orchestrator | 2025-05-13 23:45:53.890496 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-13 23:45:53.890507 | orchestrator | Tuesday 13 May 2025 23:42:28 +0000 (0:00:00.453) 0:00:50.013 *********** 2025-05-13 23:45:53.890518 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.890529 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.890539 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.890550 | orchestrator | 2025-05-13 23:45:53.890561 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-13 23:45:53.890571 | orchestrator | Tuesday 13 May 2025 23:42:29 +0000 (0:00:00.438) 0:00:50.452 *********** 2025-05-13 23:45:53.890616 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.890628 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.890639 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.890650 | orchestrator | 2025-05-13 23:45:53.890661 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 
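The three ignored timeouts above are expected on a first deployment: before deciding how to (re)start the cluster, the role probes every node's 3306 port for the string "MariaDB" in the server greeting, and since no container is running yet, all three probes fail and the failures are deliberately ignored. A minimal sketch of such a probe in Python (illustrative only, not the actual Ansible wait_for implementation; the host value is taken from the log above):

    import socket
    import time

    def port_has_banner(host, port=3306, needle=b"MariaDB", timeout=10):
        # Poll host:port until the server greeting contains `needle`,
        # roughly what "Check MariaDB service port liveness" does above.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2) as conn:
                    # MariaDB sends its handshake, including the version
                    # string, immediately after the TCP connection opens.
                    if needle in conn.recv(1024):
                        return True
            except OSError:
                pass  # nothing is listening yet; retry until the deadline
            time.sleep(1)
        return False

    # Returns False after ~10 s on a fresh node, matching the ignored
    # "Timeout when waiting for search string MariaDB" failures above.
    print(port_has_banner("192.168.16.10"))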
2025-05-13 23:45:53.890672 | orchestrator | Tuesday 13 May 2025 23:42:29 +0000 (0:00:00.445) 0:00:50.898 *********** 2025-05-13 23:45:53.890683 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.890694 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:45:53.890704 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:45:53.890715 | orchestrator | 2025-05-13 23:45:53.890735 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-13 23:45:53.890746 | orchestrator | Tuesday 13 May 2025 23:42:30 +0000 (0:00:00.654) 0:00:51.552 *********** 2025-05-13 23:45:53.890756 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.890767 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.890778 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.890788 | orchestrator | 2025-05-13 23:45:53.890799 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-13 23:45:53.890810 | orchestrator | Tuesday 13 May 2025 23:42:30 +0000 (0:00:00.478) 0:00:52.031 *********** 2025-05-13 23:45:53.890820 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.890831 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.890842 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-13 23:45:53.890853 | orchestrator | 2025-05-13 23:45:53.890863 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-13 23:45:53.890874 | orchestrator | Tuesday 13 May 2025 23:42:31 +0000 (0:00:00.417) 0:00:52.449 *********** 2025-05-13 23:45:53.890885 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.890896 | orchestrator | 2025-05-13 23:45:53.890907 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-13 23:45:53.890917 | orchestrator | Tuesday 13 May 2025 23:42:51 +0000 (0:00:20.608) 0:01:13.057 *********** 2025-05-13 23:45:53.890928 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.890939 | orchestrator | 2025-05-13 23:45:53.890949 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-13 23:45:53.890960 | orchestrator | Tuesday 13 May 2025 23:42:52 +0000 (0:00:00.132) 0:01:13.189 *********** 2025-05-13 23:45:53.890970 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.890992 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.891004 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.891014 | orchestrator | 2025-05-13 23:45:53.891025 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-13 23:45:53.891036 | orchestrator | Tuesday 13 May 2025 23:42:53 +0000 (0:00:01.021) 0:01:14.211 *********** 2025-05-13 23:45:53.891055 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.891066 | orchestrator | 2025-05-13 23:45:53.891077 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-13 23:45:53.891088 | orchestrator | Tuesday 13 May 2025 23:43:01 +0000 (0:00:08.058) 0:01:22.269 *********** 2025-05-13 23:45:53.891098 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
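bootstrap_cluster.yml is included for testbed-node-0 only because a Galera cluster has to be formed by exactly one node started with an empty gcomm:// address; the remaining nodes then join by pointing at the existing members, which is what the later "Start MariaDB on new nodes" handler does. A sketch of that distinction (illustrative; the real role templates the address into galera.cnf):

    def wsrep_cluster_address(peers, bootstrap=False):
        # The bootstrap node forms a new cluster with an empty member
        # list; joining nodes connect to the already-running members.
        return "gcomm://" if bootstrap else "gcomm://" + ",".join(peers)

    peers = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
    print(wsrep_cluster_address(peers, bootstrap=True))  # gcomm://
    print(wsrep_cluster_address(peers))                  # gcomm://192.168.16.10,...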
2025-05-13 23:45:53.891109 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.891119 | orchestrator | 2025-05-13 23:45:53.891130 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-13 23:45:53.891140 | orchestrator | Tuesday 13 May 2025 23:43:12 +0000 (0:00:11.577) 0:01:33.847 *********** 2025-05-13 23:45:53.891151 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.891162 | orchestrator | 2025-05-13 23:45:53.891172 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-13 23:45:53.891183 | orchestrator | Tuesday 13 May 2025 23:43:15 +0000 (0:00:02.649) 0:01:36.496 *********** 2025-05-13 23:45:53.891194 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.891204 | orchestrator | 2025-05-13 23:45:53.891214 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-13 23:45:53.891225 | orchestrator | Tuesday 13 May 2025 23:43:15 +0000 (0:00:00.121) 0:01:36.618 *********** 2025-05-13 23:45:53.891236 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.891246 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.891257 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.891268 | orchestrator | 2025-05-13 23:45:53.891278 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-13 23:45:53.891289 | orchestrator | Tuesday 13 May 2025 23:43:16 +0000 (0:00:00.493) 0:01:37.111 *********** 2025-05-13 23:45:53.891305 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.891317 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-13 23:45:53.891327 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:45:53.891338 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:45:53.891349 | orchestrator | 2025-05-13 23:45:53.891360 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-13 23:45:53.891370 | orchestrator | skipping: no hosts matched 2025-05-13 23:45:53.891381 | orchestrator | 2025-05-13 23:45:53.891391 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-13 23:45:53.891402 | orchestrator | 2025-05-13 23:45:53.891412 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-13 23:45:53.891423 | orchestrator | Tuesday 13 May 2025 23:43:16 +0000 (0:00:00.314) 0:01:37.425 *********** 2025-05-13 23:45:53.891433 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:45:53.891444 | orchestrator | 2025-05-13 23:45:53.891455 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-13 23:45:53.891465 | orchestrator | Tuesday 13 May 2025 23:43:35 +0000 (0:00:19.256) 0:01:56.682 *********** 2025-05-13 23:45:53.891476 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:45:53.891486 | orchestrator | 2025-05-13 23:45:53.891497 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-13 23:45:53.891508 | orchestrator | Tuesday 13 May 2025 23:44:10 +0000 (0:00:34.562) 0:02:31.244 *********** 2025-05-13 23:45:53.891518 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:45:53.891529 | orchestrator | 2025-05-13 23:45:53.891540 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-13 23:45:53.891550 
| orchestrator | 2025-05-13 23:45:53.891561 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-13 23:45:53.891572 | orchestrator | Tuesday 13 May 2025 23:44:12 +0000 (0:00:02.429) 0:02:33.673 *********** 2025-05-13 23:45:53.891653 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:45:53.891665 | orchestrator | 2025-05-13 23:45:53.891676 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-13 23:45:53.891686 | orchestrator | Tuesday 13 May 2025 23:44:32 +0000 (0:00:19.628) 0:02:53.302 *********** 2025-05-13 23:45:53.891709 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:45:53.891719 | orchestrator | 2025-05-13 23:45:53.891737 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-13 23:45:53.891748 | orchestrator | Tuesday 13 May 2025 23:45:07 +0000 (0:00:35.632) 0:03:28.934 *********** 2025-05-13 23:45:53.891759 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:45:53.891769 | orchestrator | 2025-05-13 23:45:53.891780 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-13 23:45:53.891791 | orchestrator | 2025-05-13 23:45:53.891801 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-13 23:45:53.891812 | orchestrator | Tuesday 13 May 2025 23:45:10 +0000 (0:00:02.852) 0:03:31.787 *********** 2025-05-13 23:45:53.891823 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.891833 | orchestrator | 2025-05-13 23:45:53.891844 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-13 23:45:53.891855 | orchestrator | Tuesday 13 May 2025 23:45:28 +0000 (0:00:17.660) 0:03:49.447 *********** 2025-05-13 23:45:53.891866 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service port liveness (10 retries left). 
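Each restart above is gated on "Wait for MariaDB service to sync WSREP" rather than on port liveness alone, because a Galera node accepts TCP connections while it is still transferring state. Conceptually the gate is a single status query; a sketch assuming a mysql client on the node and the monitor credentials shown earlier (placeholder password):

    import subprocess

    def wsrep_synced(host, user="monitor", password="placeholder"):
        # A node is usable once wsrep_local_state_comment is "Synced";
        # "Joined" or "Donor/Desynced" means it is still catching up.
        out = subprocess.run(
            ["mysql", "-h", host, "-u", user, f"-p{password}", "-N", "-B",
             "-e", "SHOW STATUS LIKE 'wsrep_local_state_comment'"],
            capture_output=True, text=True, check=True,
        ).stdout
        return out.strip().endswith("Synced")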
2025-05-13 23:45:53.891876 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.891887 | orchestrator | 2025-05-13 23:45:53.891898 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-13 23:45:53.891909 | orchestrator | Tuesday 13 May 2025 23:45:36 +0000 (0:00:08.017) 0:03:57.465 *********** 2025-05-13 23:45:53.891920 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.891930 | orchestrator | 2025-05-13 23:45:53.891941 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-13 23:45:53.891952 | orchestrator | 2025-05-13 23:45:53.891962 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-13 23:45:53.891973 | orchestrator | Tuesday 13 May 2025 23:45:38 +0000 (0:00:02.344) 0:03:59.810 *********** 2025-05-13 23:45:53.891984 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:45:53.891994 | orchestrator | 2025-05-13 23:45:53.892005 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-13 23:45:53.892015 | orchestrator | Tuesday 13 May 2025 23:45:39 +0000 (0:00:00.475) 0:04:00.285 *********** 2025-05-13 23:45:53.892026 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.892035 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.892045 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.892054 | orchestrator | 2025-05-13 23:45:53.892063 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-13 23:45:53.892073 | orchestrator | Tuesday 13 May 2025 23:45:41 +0000 (0:00:02.253) 0:04:02.539 *********** 2025-05-13 23:45:53.892082 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.892092 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.892101 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.892111 | orchestrator | 2025-05-13 23:45:53.892120 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-13 23:45:53.892129 | orchestrator | Tuesday 13 May 2025 23:45:43 +0000 (0:00:01.928) 0:04:04.468 *********** 2025-05-13 23:45:53.892139 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.892148 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.892158 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.892167 | orchestrator | 2025-05-13 23:45:53.892176 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-13 23:45:53.892186 | orchestrator | Tuesday 13 May 2025 23:45:45 +0000 (0:00:02.085) 0:04:06.553 *********** 2025-05-13 23:45:53.892195 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.892205 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.892215 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:45:53.892224 | orchestrator | 2025-05-13 23:45:53.892234 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-13 23:45:53.892251 | orchestrator | Tuesday 13 May 2025 23:45:47 +0000 (0:00:02.149) 0:04:08.702 *********** 2025-05-13 23:45:53.892261 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:45:53.892270 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:45:53.892286 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:45:53.892295 | orchestrator | 2025-05-13 23:45:53.892305 | orchestrator | TASK 
[Include mariadb post-upgrade.yml] **************************************** 2025-05-13 23:45:53.892315 | orchestrator | Tuesday 13 May 2025 23:45:50 +0000 (0:00:02.983) 0:04:11.686 *********** 2025-05-13 23:45:53.892325 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:45:53.892334 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:45:53.892344 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:45:53.892354 | orchestrator | 2025-05-13 23:45:53.892363 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:45:53.892373 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-13 23:45:53.892383 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-05-13 23:45:53.892394 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-13 23:45:53.892404 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-13 23:45:53.892413 | orchestrator | 2025-05-13 23:45:53.892423 | orchestrator | 2025-05-13 23:45:53.892432 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:45:53.892442 | orchestrator | Tuesday 13 May 2025 23:45:50 +0000 (0:00:00.227) 0:04:11.914 *********** 2025-05-13 23:45:53.892451 | orchestrator | =============================================================================== 2025-05-13 23:45:53.892460 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 70.20s 2025-05-13 23:45:53.892470 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.88s 2025-05-13 23:45:53.892486 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 20.61s 2025-05-13 23:45:53.892495 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.66s 2025-05-13 23:45:53.892505 | orchestrator | mariadb : Wait for first MariaDB service port liveness ----------------- 11.58s 2025-05-13 23:45:53.892514 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.84s 2025-05-13 23:45:53.892523 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.06s 2025-05-13 23:45:53.892533 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 8.02s 2025-05-13 23:45:53.892542 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.28s 2025-05-13 23:45:53.892552 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.56s 2025-05-13 23:45:53.892561 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.40s 2025-05-13 23:45:53.892571 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.00s 2025-05-13 23:45:53.892601 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.76s 2025-05-13 23:45:53.892611 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.74s 2025-05-13 23:45:53.892620 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.36s 2025-05-13 23:45:53.892630 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.29s 
2025-05-13 23:45:53.892639 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.98s
2025-05-13 23:45:53.892649 | orchestrator | Check MariaDB service --------------------------------------------------- 2.90s
2025-05-13 23:45:53.892658 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.65s
2025-05-13 23:45:53.892674 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.34s
[... repetitive status polls from 23:45:53 to 23:47:04 trimmed: tasks e8fc9090-5017-4c00-8cd9-365226f8f094 and b8186ed4-a416-4fa4-8b1e-cccf6f2ea0b1 remained in state STARTED throughout; b6b060d5-0ecd-4336-b2b2-f1d68f5f38a6 reached SUCCESS at 23:46:15, after which a01f234b-89bf-4c39-a3f1-7657586e540e was STARTED ...]
2025-05-13 23:47:07.141259 | orchestrator | 2025-05-13 23:47:07 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED
2025-05-13 23:47:07.141350 | orchestrator | 2025-05-13 23:47:07 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED
2025-05-13 23:47:07.142070 | orchestrator | 2025-05-13 23:47:07 | INFO  | Task b8186ed4-a416-4fa4-8b1e-cccf6f2ea0b1 is in state STARTED
2025-05-13 23:47:07.144434 | orchestrator |
2025-05-13 23:47:07.144449 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-13 23:47:07.144461 | orchestrator |
2025-05-13 23:47:07.144472 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-05-13 23:47:07.144483 | orchestrator | Tuesday 13 May 2025 23:45:48 +0000 (0:00:00.152) 0:00:00.152 ***********
2025-05-13 23:47:07.144495 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-05-13 23:47:07.144507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-05-13 23:47:07.144518 |
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-05-13 23:47:07.144529 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-05-13 23:47:07.144539 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-05-13 23:47:07.144550 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-05-13 23:47:07.144621 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-05-13 23:47:07.144632 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-05-13 23:47:07.144643 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-05-13 23:47:07.144654 | orchestrator | 2025-05-13 23:47:07.144665 | orchestrator | TASK [Create share directory] ************************************************** 2025-05-13 23:47:07.144676 | orchestrator | Tuesday 13 May 2025 23:45:52 +0000 (0:00:04.174) 0:00:04.327 *********** 2025-05-13 23:47:07.144687 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-13 23:47:07.144698 | orchestrator | 2025-05-13 23:47:07.144709 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-05-13 23:47:07.144748 | orchestrator | Tuesday 13 May 2025 23:45:53 +0000 (0:00:01.008) 0:00:05.335 *********** 2025-05-13 23:47:07.144760 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-13 23:47:07.144771 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-13 23:47:07.144782 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-13 23:47:07.144793 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-13 23:47:07.144804 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-13 23:47:07.144815 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-13 23:47:07.144840 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-13 23:47:07.144851 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-13 23:47:07.144862 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-13 23:47:07.144919 | orchestrator | 2025-05-13 23:47:07.144932 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-05-13 23:47:07.144945 | orchestrator | Tuesday 13 May 2025 23:46:07 +0000 (0:00:13.356) 0:00:18.692 *********** 2025-05-13 23:47:07.144958 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-05-13 23:47:07.144971 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-05-13 23:47:07.144982 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-05-13 23:47:07.144995 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-05-13 23:47:07.145007 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-05-13 23:47:07.145020 | orchestrator | changed: 
[testbed-manager] => (item=ceph.client.nova.keyring) 2025-05-13 23:47:07.145032 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-05-13 23:47:07.145045 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-05-13 23:47:07.145057 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-05-13 23:47:07.145069 | orchestrator | 2025-05-13 23:47:07.145082 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:47:07.145094 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:47:07.145108 | orchestrator | 2025-05-13 23:47:07.145119 | orchestrator | 2025-05-13 23:47:07.145130 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:47:07.145140 | orchestrator | Tuesday 13 May 2025 23:46:13 +0000 (0:00:06.354) 0:00:25.046 *********** 2025-05-13 23:47:07.145151 | orchestrator | =============================================================================== 2025-05-13 23:47:07.145161 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.36s 2025-05-13 23:47:07.145172 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.35s 2025-05-13 23:47:07.145182 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.17s 2025-05-13 23:47:07.145193 | orchestrator | Create share directory -------------------------------------------------- 1.01s 2025-05-13 23:47:07.145203 | orchestrator | 2025-05-13 23:47:07.145214 | orchestrator | 2025-05-13 23:47:07.145224 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-13 23:47:07.145235 | orchestrator | 2025-05-13 23:47:07.145263 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-13 23:47:07.145275 | orchestrator | Tuesday 13 May 2025 23:46:17 +0000 (0:00:00.172) 0:00:00.172 *********** 2025-05-13 23:47:07.145285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-13 23:47:07.145307 | orchestrator | 2025-05-13 23:47:07.145318 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-13 23:47:07.145329 | orchestrator | Tuesday 13 May 2025 23:46:18 +0000 (0:00:00.165) 0:00:00.338 *********** 2025-05-13 23:47:07.145339 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-13 23:47:07.145350 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-13 23:47:07.145360 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-13 23:47:07.145371 | orchestrator | 2025-05-13 23:47:07.145382 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-13 23:47:07.145392 | orchestrator | Tuesday 13 May 2025 23:46:19 +0000 (0:00:00.965) 0:00:01.304 *********** 2025-05-13 23:47:07.145403 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-13 23:47:07.145414 | orchestrator | 2025-05-13 23:47:07.145424 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-13 23:47:07.145435 | orchestrator | 
Tuesday 13 May 2025 23:46:19 +0000 (0:00:00.859) 0:00:02.163 *********** 2025-05-13 23:47:07.145446 | orchestrator | changed: [testbed-manager] 2025-05-13 23:47:07.145458 | orchestrator | 2025-05-13 23:47:07.145469 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-13 23:47:07.145479 | orchestrator | Tuesday 13 May 2025 23:46:20 +0000 (0:00:00.840) 0:00:03.004 *********** 2025-05-13 23:47:07.145490 | orchestrator | changed: [testbed-manager] 2025-05-13 23:47:07.145501 | orchestrator | 2025-05-13 23:47:07.145511 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-13 23:47:07.145522 | orchestrator | Tuesday 13 May 2025 23:46:21 +0000 (0:00:00.667) 0:00:03.671 *********** 2025-05-13 23:47:07.145532 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-05-13 23:47:07.145543 | orchestrator | ok: [testbed-manager] 2025-05-13 23:47:07.145575 | orchestrator | 2025-05-13 23:47:07.145587 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-13 23:47:07.145597 | orchestrator | Tuesday 13 May 2025 23:46:55 +0000 (0:00:33.963) 0:00:37.634 *********** 2025-05-13 23:47:07.145608 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-13 23:47:07.145619 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-13 23:47:07.145629 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-13 23:47:07.145640 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-13 23:47:07.145651 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-13 23:47:07.145661 | orchestrator | 2025-05-13 23:47:07.145672 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-13 23:47:07.145695 | orchestrator | Tuesday 13 May 2025 23:46:59 +0000 (0:00:03.729) 0:00:41.364 *********** 2025-05-13 23:47:07.145706 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-13 23:47:07.145717 | orchestrator | 2025-05-13 23:47:07.145727 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-13 23:47:07.145738 | orchestrator | Tuesday 13 May 2025 23:46:59 +0000 (0:00:00.433) 0:00:41.798 *********** 2025-05-13 23:47:07.145748 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:47:07.145759 | orchestrator | 2025-05-13 23:47:07.145769 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-13 23:47:07.145780 | orchestrator | Tuesday 13 May 2025 23:46:59 +0000 (0:00:00.124) 0:00:41.922 *********** 2025-05-13 23:47:07.145790 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:47:07.145801 | orchestrator | 2025-05-13 23:47:07.145811 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-13 23:47:07.145822 | orchestrator | Tuesday 13 May 2025 23:47:00 +0000 (0:00:00.347) 0:00:42.269 *********** 2025-05-13 23:47:07.145833 | orchestrator | changed: [testbed-manager] 2025-05-13 23:47:07.145844 | orchestrator | 2025-05-13 23:47:07.145854 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-13 23:47:07.145917 | orchestrator | Tuesday 13 May 2025 23:47:01 +0000 (0:00:01.640) 0:00:43.910 *********** 2025-05-13 23:47:07.145930 | orchestrator | changed: [testbed-manager] 2025-05-13 23:47:07.145941 | 
orchestrator | 2025-05-13 23:47:07.145951 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-13 23:47:07.145962 | orchestrator | Tuesday 13 May 2025 23:47:02 +0000 (0:00:00.714) 0:00:44.625 *********** 2025-05-13 23:47:07.145973 | orchestrator | changed: [testbed-manager] 2025-05-13 23:47:07.145983 | orchestrator | 2025-05-13 23:47:07.145994 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-13 23:47:07.146005 | orchestrator | Tuesday 13 May 2025 23:47:03 +0000 (0:00:00.571) 0:00:45.196 *********** 2025-05-13 23:47:07.146095 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-13 23:47:07.146111 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-13 23:47:07.146122 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-13 23:47:07.146133 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-13 23:47:07.146144 | orchestrator | 2025-05-13 23:47:07.146154 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:47:07.146165 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 23:47:07.146177 | orchestrator | 2025-05-13 23:47:07.146187 | orchestrator | 2025-05-13 23:47:07.146198 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:47:07.146209 | orchestrator | Tuesday 13 May 2025 23:47:04 +0000 (0:00:01.482) 0:00:46.679 *********** 2025-05-13 23:47:07.146230 | orchestrator | =============================================================================== 2025-05-13 23:47:07.146241 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 33.96s 2025-05-13 23:47:07.146252 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.73s 2025-05-13 23:47:07.146263 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.64s 2025-05-13 23:47:07.146273 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.48s 2025-05-13 23:47:07.146284 | orchestrator | osism.services.cephclient : Create required directories ----------------- 0.97s 2025-05-13 23:47:07.146295 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 0.86s 2025-05-13 23:47:07.146305 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.84s 2025-05-13 23:47:07.146316 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.72s 2025-05-13 23:47:07.146327 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.67s 2025-05-13 23:47:07.146337 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s 2025-05-13 23:47:07.146348 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s 2025-05-13 23:47:07.146358 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.35s 2025-05-13 23:47:07.146369 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.17s 2025-05-13 23:47:07.146380 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-05-13 23:47:07.146391 | orchestrator | 2025-05-13 23:47:07 | INFO  | Task 
a01f234b-89bf-4c39-a3f1-7657586e540e is in state SUCCESS 2025-05-13 23:47:07.146402 | orchestrator | 2025-05-13 23:47:07 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:47:07.146413 | orchestrator | 2025-05-13 23:47:07 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:47:07.146424 | orchestrator | 2025-05-13 23:47:07 | INFO  | Wait 1 second(s) until the next check
[... the identical status poll repeats roughly every 3 seconds from 23:47:10 through 23:47:46: tasks e8fc9090-5017-4c00-8cd9-365226f8f094, e158c25f-f200-497d-9415-870201673bb8, b8186ed4-a416-4fa4-8b1e-cccf6f2ea0b1, 9499ae37-5b86-48b7-92cb-0b1b49901608 and 22f0ca57-d7f4-4505-9807-24bd1095caa3 all remain in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...]
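The wait loop above is the OSISM client polling its queued tasks until each one leaves STARTED. A minimal sketch of such a loop in Python; get_task_state is a hypothetical stand-in for the real result-backend lookup, not the actual OSISM API:

    import time

    def get_task_state(task_id: str) -> str:
        """Hypothetical stand-in for the OSISM/Celery result-backend lookup."""
        raise NotImplementedError

    def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
        # Poll every tracked task until it reaches a final state, mirroring
        # the "is in state ..." / "Wait 1 second(s)" lines in the log above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"INFO  | Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"INFO  | Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)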
2025-05-13 23:47:49.922144 | orchestrator | 2025-05-13 23:47:49 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:47:49.922824 | orchestrator | 2025-05-13 23:47:49 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:47:49.925612 | orchestrator | 2025-05-13 23:47:49 | INFO  | Task b8186ed4-a416-4fa4-8b1e-cccf6f2ea0b1 is in state SUCCESS 2025-05-13 23:47:49.927109 | orchestrator | 2025-05-13 23:47:49.927231 | orchestrator | 2025-05-13 23:47:49.927246 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:47:49.927258 | orchestrator | 2025-05-13 23:47:49.927269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:47:49.927280 | orchestrator | Tuesday 13 May 2025 23:45:55 +0000 (0:00:00.292) 0:00:00.292 *********** 2025-05-13 23:47:49.927291 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:47:49.927304 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:47:49.927315 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:47:49.927325 | orchestrator | 2025-05-13 23:47:49.927337 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:47:49.927406 | orchestrator | Tuesday 13 May 2025 23:45:55 +0000 (0:00:00.309) 0:00:00.601 *********** 2025-05-13 23:47:49.927419 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-13 23:47:49.927431 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-13 23:47:49.927441 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-13 23:47:49.927452 | orchestrator | 2025-05-13 23:47:49.927463 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-13 23:47:49.927473 | orchestrator | 2025-05-13 23:47:49.927484 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-13 23:47:49.927494 | orchestrator | Tuesday 13 May 2025 23:45:56 +0000 (0:00:00.376) 0:00:00.977 *********** 2025-05-13 23:47:49.927505 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:47:49.927517 | orchestrator | 2025-05-13 23:47:49.927527 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-13 23:47:49.927538 | orchestrator | Tuesday 13 May 2025 23:45:56 +0000 (0:00:00.453) 0:00:01.431 *********** 2025-05-13 23:47:49.927583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 23:47:49.927631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {... identical to the testbed-node-0 item above except 'healthcheck_curl http://192.168.16.11:80' ...}}) 2025-05-13 23:47:49.927662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {... identical to the testbed-node-0 item above except 'healthcheck_curl http://192.168.16.12:80' ...}}) 2025-05-13 23:47:49.927676 | orchestrator | 2025-05-13 23:47:49.927687 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-13 23:47:49.927700 | orchestrator | Tuesday 13 May 2025 23:45:57 +0000 (0:00:01.281) 0:00:02.713 *********** 2025-05-13 23:47:49.927713 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:47:49.927725 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:47:49.927737 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:47:49.927758 | orchestrator | 2025-05-13 23:47:49.927770 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-13 23:47:49.927781 | orchestrator | Tuesday 13 May 2025 23:45:58 +0000 (0:00:00.428) 0:00:03.141 *********** 2025-05-13 23:47:49.927794 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-13 23:47:49.927813 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-13 23:47:49.927825 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-13 23:47:49.927838 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-13 23:47:49.927850 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-13 23:47:49.927862 | orchestrator | skipping: [testbed-node-0] =>
(item={'name': 'tacker', 'enabled': False})  2025-05-13 23:47:49.927874 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-13 23:47:49.927887 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-13 23:47:49.927900 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-13 23:47:49.927912 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-13 23:47:49.927924 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-13 23:47:49.927936 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-13 23:47:49.927948 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-13 23:47:49.927959 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-13 23:47:49.927971 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-13 23:47:49.927983 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-13 23:47:49.927995 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-13 23:47:49.928007 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-13 23:47:49.928019 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-13 23:47:49.928031 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-13 23:47:49.928042 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-13 23:47:49.928054 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-13 23:47:49.928066 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-13 23:47:49.928080 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-13 23:47:49.928092 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-13 23:47:49.928104 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-13 23:47:49.928115 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-13 23:47:49.928125 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-13 23:47:49.928136 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-13 23:47:49.928146 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-13 23:47:49.928165 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 
'manila', 'enabled': True}) 2025-05-13 23:47:49.928176 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-13 23:47:49.928191 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-13 23:47:49.928203 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-13 23:47:49.928214 | orchestrator | 2025-05-13 23:47:49.928224 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-13 23:47:49.928235 | orchestrator | Tuesday 13 May 2025 23:45:58 +0000 (0:00:00.665) 0:00:03.807 *********** 2025-05-13 23:47:49.928246 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:47:49.928256 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:47:49.928267 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:47:49.928277 | orchestrator | 2025-05-13 23:47:49.928288 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-13 23:47:49.928299 | orchestrator | Tuesday 13 May 2025 23:45:59 +0000 (0:00:00.312) 0:00:04.119 *********** 2025-05-13 23:47:49.928309 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:47:49.928320 | orchestrator | 2025-05-13 23:47:49.928335 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-13 23:47:49.928346 | orchestrator | Tuesday 13 May 2025 23:45:59 +0000 (0:00:00.128) 0:00:04.248 *********** 2025-05-13 23:47:49.928357 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:47:49.928367 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:47:49.928378 | orchestrator | skipping: [testbed-node-2]
[... the same three tasks (Update policy file name -> ok on all three nodes; Check if policies shall be overwritten -> skipping; Update custom policy file name -> skipping) repeat once per included service through 23:46:07, with identical results each time ...]
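The repeated task trio above is the role's per-service policy handling: for every enabled service it computes the policy file name, then checks whether the operator supplied a custom policy file to overwrite the default. A sketch of that decision in Python; the directory layout and file names are illustrative assumptions, not the role's actual variables:

    from pathlib import Path

    def find_custom_policy(service: str, overlay: Path) -> Path | None:
        # Prefer a YAML policy file, fall back to JSON; returning None is
        # the "nothing to overwrite" case that produces the skips above.
        for suffix in ("yaml", "json"):
            candidate = overlay / f"{service}_policy.{suffix}"
            if candidate.is_file():
                return candidate
        return None

With an empty overlay directory this returns None for all ten included services, which matches the all-skipping output in this run: Horizon keeps the policy defaults packaged in the image.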
2025-05-13 23:47:49.930109 | orchestrator | 2025-05-13 23:47:49.930120 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-13 23:47:49.930130 | orchestrator | Tuesday 13 May 2025 23:46:07 +0000 (0:00:00.296) 0:00:12.901 *********** 2025-05-13 23:47:49.930141 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:47:49.930151 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:47:49.930162 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:47:49.930173 | orchestrator | 2025-05-13 23:47:49.930183 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-13 23:47:49.930194 | orchestrator | Tuesday 13 May 2025 23:46:09 +0000 (0:00:01.680) 0:00:14.581 *********** 2025-05-13 23:47:49.930205 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-13 23:47:49.930221 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-13 23:47:49.930232 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-13 23:47:49.930243 | orchestrator | 2025-05-13 23:47:49.930254 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-13 23:47:49.930264 | orchestrator | Tuesday 13 May 2025 23:46:11 +0000 (0:00:02.290) 0:00:16.872 *********** 2025-05-13 23:47:49.930275 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-13 23:47:49.930286 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-13 23:47:49.930296 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-13 23:47:49.930307 | orchestrator | 2025-05-13 23:47:49.930318 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-13 23:47:49.930336 | orchestrator | Tuesday 13 May 2025 23:46:14 +0000 (0:00:02.498) 0:00:19.371 *********** 2025-05-13 23:47:49.930347 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-13 23:47:49.930358 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-13 23:47:49.930368 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-13 23:47:49.930379 | orchestrator | 2025-05-13 23:47:49.930390 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-13 23:47:49.930400 | orchestrator | Tuesday 13 May 2025 23:46:16 +0000 (0:00:01.690) 0:00:21.062 *********** 2025-05-13 23:47:49.930411 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:47:49.930421 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:47:49.930432 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:47:49.930443 | orchestrator | 2025-05-13 23:47:49.930453 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-13 23:47:49.930464 | orchestrator | Tuesday 13 May 2025 23:46:16 +0000 (0:00:00.295) 0:00:21.386 *********** 2025-05-13 23:47:49.930474 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:47:49.930485 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:47:49.930496 | orchestrator | skipping: [testbed-node-2]
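The config.json files copied at the start of the block above follow the usual Kolla convention: at container start, the kolla entrypoint reads this file, copies the listed config files into place with the given ownership and permissions, and then execs the service command. A sketch of the shape of such a payload; the exact command and destination paths for Horizon are assumptions here:

    HORIZON_CONFIG_JSON = {
        # Process that PID 1 execs once the config files are in place.
        "command": "/usr/sbin/apache2 -DFOREGROUND",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/horizon.conf",
                "dest": "/etc/apache2/conf-enabled/000-default.conf",
                "owner": "horizon",
                "perm": "0600",
            },
        ],
    }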
2025-05-13 23:47:49.930507 | orchestrator | 2025-05-13 23:47:49.930517 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-13 23:47:49.930528 | orchestrator | Tuesday 13 May 2025 23:46:16 +0000 (0:00:00.453) 0:00:21.682 *********** 2025-05-13 23:47:49.930729 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:47:49.930767 | orchestrator | 2025-05-13 23:47:49.930778 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-13 23:47:49.930789 | orchestrator | Tuesday 13 May 2025 23:46:17 +0000 (0:00:00.896) 0:00:22.578 *********** 2025-05-13 23:47:49.930812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {... identical to the testbed-node-0 item in the config-directories task above except 'healthcheck_curl http://192.168.16.11:80' ...}}) 2025-05-13 23:47:49.930845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {... identical, 'healthcheck_curl http://192.168.16.10:80' ...}}) 2025-05-13 23:47:49.930880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {... identical, 'healthcheck_curl http://192.168.16.12:80' ...}})
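The 'healthcheck' block carried in the service definition above maps directly onto a Docker healthcheck. A sketch of that translation; the helper name is illustrative, and the nanosecond conversion matches what the Docker SDK generally expects for durations:

    def to_docker_healthcheck(hc: dict) -> dict:
        # Kolla keeps the values as strings of seconds; Docker wants ns.
        ns = lambda v: int(v) * 1_000_000_000
        return {
            "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80']
            "interval": ns(hc["interval"]),
            "timeout": ns(hc["timeout"]),
            "retries": int(hc["retries"]),
            "start_period": ns(hc["start_period"]),
        }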
2025-05-13 23:47:49.930893 | orchestrator | 2025-05-13 23:47:49.930904 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-13 23:47:49.930915 | orchestrator | Tuesday 13 May 2025 23:46:19 +0000 (0:00:01.480) 0:00:24.059 *********** 2025-05-13 23:47:49.930936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {... same horizon service definition as above ...}})  2025-05-13 23:47:49.930955 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:47:49.930979 | orchestrator | skipping: [testbed-node-1] => (item={... same ...}})  2025-05-13 23:47:49.930992 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:47:49.931004 | orchestrator | skipping: [testbed-node-2] => (item={... same ...}})  2025-05-13 23:47:49.931023 | orchestrator | skipping: [testbed-node-2]
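Every item in the TLS-certificate task reports skipping because the horizon backends run plain HTTP in this deployment ('tls_backend': 'no' throughout the haproxy entries). A sketch of the guard that produces those skips, written against the item structure visible in the log; the function name is illustrative:

    def backend_tls_enabled(service_value: dict) -> bool:
        # The copy task only fires when at least one haproxy entry for the
        # service terminates TLS at the backend; in this log every entry
        # carries 'tls_backend': 'no', so the result is False everywhere.
        haproxy = service_value.get("haproxy", {})
        return any(entry.get("tls_backend") == "yes" for entry in haproxy.values())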
2025-05-13 23:47:49.931033 | orchestrator | 2025-05-13 23:47:49.931044 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-13 23:47:49.931055 | orchestrator | Tuesday 13 May 2025 23:46:19 +0000 (0:00:00.530) 0:00:24.589 *********** 2025-05-13 23:47:49.931080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {... same horizon service definition as above ...}})  2025-05-13 23:47:49.931100 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:47:49.931111 | orchestrator | skipping: [testbed-node-1] => (item={... same ...}})  2025-05-13 23:47:49.931122 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:47:49.931145 | orchestrator | skipping: [testbed-node-2] => (item={... same ...}})  2025-05-13 23:47:49.931163 | orchestrator | skipping: [testbed-node-2]
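The container deploy that follows is idempotent in the kolla-ansible style: the desired definition (image, environment, volumes, and so on) is compared against the running container, and the container is only (re)created on drift, so 'changed' below means a freshly configured horizon container was brought up. A compact sketch of that comparison; the field selection is illustrative:

    def container_action(desired: dict, running: dict | None) -> str:
        # First deployment: nothing is running yet, so create the container.
        if running is None:
            return "create"
        drift = any(
            desired[key] != running.get(key)
            for key in ("image", "environment", "volumes")
        )
        return "recreate" if drift else "unchanged"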
23:47:49.931182 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-13 23:47:49.931192 | orchestrator | Tuesday 13 May 2025 23:46:20 +0000 (0:00:00.952) 0:00:25.541 *********** 2025-05-13 23:47:49.931202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 23:47:49.931236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 23:47:49.931260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 23:47:49.931271 | orchestrator | 2025-05-13 23:47:49.931281 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-13 23:47:49.931291 | orchestrator | Tuesday 13 May 2025 23:46:21 +0000 (0:00:01.219) 0:00:26.760 *********** 2025-05-13 23:47:49.931300 
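The item dicts echoed by these horizon tasks follow kolla-ansible's service-definition shape: a container name, image, volume list, a Docker healthcheck, and an haproxy section describing the frontends and backends to render (including the acme_client_back routing rules visible above). A minimal sketch of reading that structure in Python; the dict literal is trimmed from the task output above, and the helper is illustrative, not kolla-ansible code:

# Derive Docker healthcheck arguments from a kolla-style service
# definition, as dumped (trimmed) in the task output above.
service = {
    "container_name": "horizon",
    "image": "registry.osism.tech/kolla/horizon:2024.2",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
        "timeout": "30",
    },
}

def healthcheck_args(svc):
    # kolla stores the numeric fields as strings; coerce them here.
    hc = svc.get("healthcheck", {})
    return {
        "test": hc.get("test"),
        "interval": int(hc.get("interval", 30)),
        "start_period": int(hc.get("start_period", 5)),
        "retries": int(hc.get("retries", 3)),
        "timeout": int(hc.get("timeout", 30)),
    }

print(healthcheck_args(service))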
| orchestrator | skipping: [testbed-node-0] 2025-05-13 23:47:49.931310 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:47:49.931319 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:47:49.931329 | orchestrator | 2025-05-13 23:47:49.931346 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-13 23:47:49.931357 | orchestrator | Tuesday 13 May 2025 23:46:22 +0000 (0:00:00.279) 0:00:27.040 *********** 2025-05-13 23:47:49.931366 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:47:49.931376 | orchestrator | 2025-05-13 23:47:49.931394 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-13 23:47:49.931416 | orchestrator | Tuesday 13 May 2025 23:46:22 +0000 (0:00:00.604) 0:00:27.645 *********** 2025-05-13 23:47:49.931427 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:47:49.931437 | orchestrator | 2025-05-13 23:47:49.931447 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-13 23:47:49.931464 | orchestrator | Tuesday 13 May 2025 23:46:24 +0000 (0:00:01.980) 0:00:29.626 *********** 2025-05-13 23:47:49.931474 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:47:49.931484 | orchestrator | 2025-05-13 23:47:49.931493 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-13 23:47:49.931503 | orchestrator | Tuesday 13 May 2025 23:46:26 +0000 (0:00:02.077) 0:00:31.704 *********** 2025-05-13 23:47:49.931512 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:47:49.931521 | orchestrator | 2025-05-13 23:47:49.931531 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-13 23:47:49.931540 | orchestrator | Tuesday 13 May 2025 23:46:41 +0000 (0:00:14.914) 0:00:46.618 *********** 2025-05-13 23:47:49.931577 | orchestrator | 2025-05-13 23:47:49.931586 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-13 23:47:49.931596 | orchestrator | Tuesday 13 May 2025 23:46:41 +0000 (0:00:00.064) 0:00:46.683 *********** 2025-05-13 23:47:49.931605 | orchestrator | 2025-05-13 23:47:49.931614 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-13 23:47:49.931623 | orchestrator | Tuesday 13 May 2025 23:46:41 +0000 (0:00:00.066) 0:00:46.750 *********** 2025-05-13 23:47:49.931633 | orchestrator | 2025-05-13 23:47:49.931642 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-13 23:47:49.931652 | orchestrator | Tuesday 13 May 2025 23:46:41 +0000 (0:00:00.065) 0:00:46.816 *********** 2025-05-13 23:47:49.931661 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:47:49.931670 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:47:49.931680 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:47:49.931689 | orchestrator | 2025-05-13 23:47:49.931698 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:47:49.931708 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-05-13 23:47:49.931718 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-13 23:47:49.931727 | orchestrator | testbed-node-2 : ok=34  
changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-13 23:47:49.931737 | orchestrator | 2025-05-13 23:47:49.931746 | orchestrator | 2025-05-13 23:47:49.931756 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:47:49.931765 | orchestrator | Tuesday 13 May 2025 23:47:47 +0000 (0:01:06.119) 0:01:52.935 *********** 2025-05-13 23:47:49.931774 | orchestrator | =============================================================================== 2025-05-13 23:47:49.931784 | orchestrator | horizon : Restart horizon container ------------------------------------ 66.12s 2025-05-13 23:47:49.931793 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.91s 2025-05-13 23:47:49.931803 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.50s 2025-05-13 23:47:49.931812 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.29s 2025-05-13 23:47:49.931822 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.08s 2025-05-13 23:47:49.931831 | orchestrator | horizon : Creating Horizon database ------------------------------------- 1.98s 2025-05-13 23:47:49.931840 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.69s 2025-05-13 23:47:49.931850 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.68s 2025-05-13 23:47:49.931870 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.48s 2025-05-13 23:47:49.931879 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.28s 2025-05-13 23:47:49.931889 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.22s 2025-05-13 23:47:49.931898 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.95s 2025-05-13 23:47:49.931908 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.90s 2025-05-13 23:47:49.931917 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s 2025-05-13 23:47:49.931926 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2025-05-13 23:47:49.931936 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s 2025-05-13 23:47:49.931951 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2025-05-13 23:47:49.931960 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.53s 2025-05-13 23:47:49.931970 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2025-05-13 23:47:49.931979 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2025-05-13 23:47:49.931988 | orchestrator | 2025-05-13 23:47:49 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:47:49.931998 | orchestrator | 2025-05-13 23:47:49 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:47:49.932008 | orchestrator | 2025-05-13 23:47:49 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:47:52.983643 | orchestrator | 2025-05-13 23:47:52 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:47:52.984097 | 
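The repeated "Task <uuid> is in state STARTED" lines around this point come from the OSISM manager: the deploy wrapper tracks a set of task IDs and polls their states until each one reports SUCCESS, printing a wait notice between rounds. A minimal sketch of that pattern; get_state is a hypothetical lookup standing in for the real task-backend query:

import time

def wait_for_tasks(task_ids, get_state, interval=1):
    # Poll until every tracked task has reached SUCCESS.
    # get_state(task_id) -> str is assumed to query the task backend
    # (e.g. a Celery result store); the name is illustrative.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard below is safe
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["e8fc9090-5017-4c00-8cd9-365226f8f094"], lambda _tid: "SUCCESS")

Note that the printed interval is one second while the observed checks land roughly three seconds apart; the remainder presumably covers the state queries themselves.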
orchestrator | 2025-05-13 23:47:52 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:47:52.985110 | orchestrator | 2025-05-13 23:47:52 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:47:52.985808 | orchestrator | 2025-05-13 23:47:52 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:47:52.985832 | orchestrator | 2025-05-13 23:47:52 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:47:56.035987 | orchestrator | 2025-05-13 23:47:56 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:47:56.036084 | orchestrator | 2025-05-13 23:47:56 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:47:56.036108 | orchestrator | 2025-05-13 23:47:56 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:47:56.036127 | orchestrator | 2025-05-13 23:47:56 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:47:56.036146 | orchestrator | 2025-05-13 23:47:56 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:47:59.072222 | orchestrator | 2025-05-13 23:47:59 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:47:59.073629 | orchestrator | 2025-05-13 23:47:59 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:47:59.074523 | orchestrator | 2025-05-13 23:47:59 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:47:59.075694 | orchestrator | 2025-05-13 23:47:59 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:47:59.075788 | orchestrator | 2025-05-13 23:47:59 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:02.122981 | orchestrator | 2025-05-13 23:48:02 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:02.123637 | orchestrator | 2025-05-13 23:48:02 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:02.124649 | orchestrator | 2025-05-13 23:48:02 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:02.125900 | orchestrator | 2025-05-13 23:48:02 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:48:02.125929 | orchestrator | 2025-05-13 23:48:02 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:05.172825 | orchestrator | 2025-05-13 23:48:05 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:05.173698 | orchestrator | 2025-05-13 23:48:05 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:05.175301 | orchestrator | 2025-05-13 23:48:05 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:05.180055 | orchestrator | 2025-05-13 23:48:05 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:48:05.183191 | orchestrator | 2025-05-13 23:48:05 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:08.231827 | orchestrator | 2025-05-13 23:48:08 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:08.231901 | orchestrator | 2025-05-13 23:48:08 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:08.234656 | orchestrator | 2025-05-13 23:48:08 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:08.235464 | 
orchestrator | 2025-05-13 23:48:08 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:48:08.235487 | orchestrator | 2025-05-13 23:48:08 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:11.292885 | orchestrator | 2025-05-13 23:48:11 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:11.294107 | orchestrator | 2025-05-13 23:48:11 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:11.296105 | orchestrator | 2025-05-13 23:48:11 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:11.297905 | orchestrator | 2025-05-13 23:48:11 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:48:11.298074 | orchestrator | 2025-05-13 23:48:11 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:14.336792 | orchestrator | 2025-05-13 23:48:14 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:14.337084 | orchestrator | 2025-05-13 23:48:14 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:14.337848 | orchestrator | 2025-05-13 23:48:14 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:14.338645 | orchestrator | 2025-05-13 23:48:14 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:48:14.338913 | orchestrator | 2025-05-13 23:48:14 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:17.383935 | orchestrator | 2025-05-13 23:48:17 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:17.386584 | orchestrator | 2025-05-13 23:48:17 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:17.388810 | orchestrator | 2025-05-13 23:48:17 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:17.390330 | orchestrator | 2025-05-13 23:48:17 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state STARTED 2025-05-13 23:48:17.390378 | orchestrator | 2025-05-13 23:48:17 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:20.434286 | orchestrator | 2025-05-13 23:48:20 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:20.434415 | orchestrator | 2025-05-13 23:48:20 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:20.434438 | orchestrator | 2025-05-13 23:48:20 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:20.434455 | orchestrator | 2025-05-13 23:48:20 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:48:20.436607 | orchestrator | 2025-05-13 23:48:20 | INFO  | Task 22f0ca57-d7f4-4505-9807-24bd1095caa3 is in state SUCCESS 2025-05-13 23:48:20.442631 | orchestrator | 2025-05-13 23:48:20 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state STARTED 2025-05-13 23:48:20.442698 | orchestrator | 2025-05-13 23:48:20 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:23.483006 | orchestrator | 2025-05-13 23:48:23 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:23.483102 | orchestrator | 2025-05-13 23:48:23 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:23.483853 | orchestrator | 2025-05-13 23:48:23 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:23.486261 | 
orchestrator | 2025-05-13 23:48:23 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:48:23.486776 | orchestrator | 2025-05-13 23:48:23 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state STARTED 2025-05-13 23:48:23.486801 | orchestrator | 2025-05-13 23:48:23 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:26.521800 | orchestrator | 2025-05-13 23:48:26 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:26.521896 | orchestrator | 2025-05-13 23:48:26 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:26.521910 | orchestrator | 2025-05-13 23:48:26 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:26.523375 | orchestrator | 2025-05-13 23:48:26 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:48:26.523869 | orchestrator | 2025-05-13 23:48:26 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state STARTED 2025-05-13 23:48:26.524067 | orchestrator | 2025-05-13 23:48:26 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:29.563516 | orchestrator | 2025-05-13 23:48:29 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:29.566941 | orchestrator | 2025-05-13 23:48:29 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:29.573193 | orchestrator | 2025-05-13 23:48:29 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:29.573267 | orchestrator | 2025-05-13 23:48:29 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:48:29.576607 | orchestrator | 2025-05-13 23:48:29 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state STARTED 2025-05-13 23:48:29.578173 | orchestrator | 2025-05-13 23:48:29 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:32.625657 | orchestrator | 2025-05-13 23:48:32 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:32.628991 | orchestrator | 2025-05-13 23:48:32 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state STARTED 2025-05-13 23:48:32.636116 | orchestrator | 2025-05-13 23:48:32 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:32.640895 | orchestrator | 2025-05-13 23:48:32 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:48:32.646453 | orchestrator | 2025-05-13 23:48:32 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state STARTED 2025-05-13 23:48:32.646751 | orchestrator | 2025-05-13 23:48:32 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:35.696901 | orchestrator | 2025-05-13 23:48:35 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:35.698291 | orchestrator | 2025-05-13 23:48:35 | INFO  | Task e158c25f-f200-497d-9415-870201673bb8 is in state SUCCESS 2025-05-13 23:48:35.700650 | orchestrator | 2025-05-13 23:48:35 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:35.703028 | orchestrator | 2025-05-13 23:48:35 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:48:35.705146 | orchestrator | 2025-05-13 23:48:35 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state STARTED 2025-05-13 23:48:35.705216 | orchestrator | 2025-05-13 23:48:35 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:38.756772 | 
orchestrator | 2025-05-13 23:48:38 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:38.757038 | orchestrator | 2025-05-13 23:48:38 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:38.759001 | orchestrator | 2025-05-13 23:48:38 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:48:38.759348 | orchestrator | 2025-05-13 23:48:38 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state STARTED 2025-05-13 23:48:38.759381 | orchestrator | 2025-05-13 23:48:38 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:41.810676 | orchestrator | 2025-05-13 23:48:41 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:41.813205 | orchestrator | 2025-05-13 23:48:41 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:41.814826 | orchestrator | 2025-05-13 23:48:41 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:48:41.816620 | orchestrator | 2025-05-13 23:48:41 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state STARTED 2025-05-13 23:48:41.816664 | orchestrator | 2025-05-13 23:48:41 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:44.867876 | orchestrator | 2025-05-13 23:48:44 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state STARTED 2025-05-13 23:48:44.870156 | orchestrator | 2025-05-13 23:48:44 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED 2025-05-13 23:48:44.873727 | orchestrator | 2025-05-13 23:48:44 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:48:44.875494 | orchestrator | 2025-05-13 23:48:44 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state STARTED 2025-05-13 23:48:44.875585 | orchestrator | 2025-05-13 23:48:44 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:48:47.934382 | orchestrator | 2025-05-13 23:48:47 | INFO  | Task e8fc9090-5017-4c00-8cd9-365226f8f094 is in state SUCCESS 2025-05-13 23:48:47.935951 | orchestrator | 2025-05-13 23:48:47.935994 | orchestrator | 2025-05-13 23:48:47.936007 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:48:47.936019 | orchestrator | 2025-05-13 23:48:47.936056 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:48:47.936069 | orchestrator | Tuesday 13 May 2025 23:47:09 +0000 (0:00:00.195) 0:00:00.196 *********** 2025-05-13 23:48:47.936081 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:48:47.936156 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:48:47.936171 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:48:47.936182 | orchestrator | 2025-05-13 23:48:47.936205 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:48:47.936216 | orchestrator | Tuesday 13 May 2025 23:47:09 +0000 (0:00:00.602) 0:00:00.799 *********** 2025-05-13 23:48:47.936227 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-13 23:48:47.936238 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-13 23:48:47.936248 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-13 23:48:47.936259 | orchestrator | 2025-05-13 23:48:47.936270 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-13 23:48:47.936280 | orchestrator 
| 2025-05-13 23:48:47.936291 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-13 23:48:47.936302 | orchestrator | Tuesday 13 May 2025 23:47:10 +0000 (0:00:01.092) 0:00:01.891 *********** 2025-05-13 23:48:47.936313 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:48:47.936323 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:48:47.936334 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:48:47.936344 | orchestrator | 2025-05-13 23:48:47.936355 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:48:47.936366 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:48:47.936379 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:48:47.936390 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:48:47.936401 | orchestrator | 2025-05-13 23:48:47.936411 | orchestrator | 2025-05-13 23:48:47.936422 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:48:47.936433 | orchestrator | Tuesday 13 May 2025 23:48:17 +0000 (0:01:06.952) 0:01:08.844 *********** 2025-05-13 23:48:47.936444 | orchestrator | =============================================================================== 2025-05-13 23:48:47.936454 | orchestrator | Waiting for Keystone public port to be UP ------------------------------ 66.95s 2025-05-13 23:48:47.936465 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.09s 2025-05-13 23:48:47.936476 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s 2025-05-13 23:48:47.936486 | orchestrator | 2025-05-13 23:48:47.936497 | orchestrator | 2025-05-13 23:48:47.936533 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2025-05-13 23:48:47.936544 | orchestrator | 2025-05-13 23:48:47.936555 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-13 23:48:47.936566 | orchestrator | Tuesday 13 May 2025 23:47:09 +0000 (0:00:00.316) 0:00:00.316 *********** 2025-05-13 23:48:47.936577 | orchestrator | changed: [testbed-manager] 2025-05-13 23:48:47.936588 | orchestrator | 2025-05-13 23:48:47.936599 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-13 23:48:47.936609 | orchestrator | Tuesday 13 May 2025 23:47:11 +0000 (0:00:02.309) 0:00:02.626 *********** 2025-05-13 23:48:47.936620 | orchestrator | changed: [testbed-manager] 2025-05-13 23:48:47.936631 | orchestrator | 2025-05-13 23:48:47.936642 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-13 23:48:47.936653 | orchestrator | Tuesday 13 May 2025 23:47:12 +0000 (0:00:01.316) 0:00:03.942 *********** 2025-05-13 23:48:47.936663 | orchestrator | changed: [testbed-manager] 2025-05-13 23:48:47.936674 | orchestrator | 2025-05-13 23:48:47.936685 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-13 23:48:47.936696 | orchestrator | Tuesday 13 May 2025 23:47:14 +0000 (0:00:01.359) 0:00:05.302 *********** 2025-05-13 23:48:47.936706 | orchestrator | changed: [testbed-manager] 2025-05-13 23:48:47.936773 | orchestrator | 2025-05-13 23:48:47.936784 | orchestrator |
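The dashboard tasks in this play map one-to-one onto ceph mgr module commands: disable the module, set its configuration keys, then re-enable it so the new settings take effect. A sketch of the equivalent CLI sequence driven from Python, assuming the ceph CLI is available on testbed-manager as the play implies:

import subprocess

def ceph(*args):
    # Thin wrapper; raises if the ceph CLI exits non-zero.
    subprocess.run(["ceph", *args], check=True)

# Each entry mirrors one "Set mgr/dashboard/..." task in the play.
settings = {
    "mgr/dashboard/ssl": "false",
    "mgr/dashboard/server_port": "7000",
    "mgr/dashboard/server_addr": "0.0.0.0",
    "mgr/dashboard/standby_behaviour": "error",
    "mgr/dashboard/standby_error_status_code": "404",
}

ceph("mgr", "module", "disable", "dashboard")
for key, value in settings.items():
    ceph("config", "set", "mgr", key, value)
ceph("mgr", "module", "enable", "dashboard")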
TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-13 23:48:47.936803 | orchestrator | Tuesday 13 May 2025 23:47:15 +0000 (0:00:01.356) 0:00:06.659 *********** 2025-05-13 23:48:47.936814 | orchestrator | changed: [testbed-manager] 2025-05-13 23:48:47.936825 | orchestrator | 2025-05-13 23:48:47.936836 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-13 23:48:47.936847 | orchestrator | Tuesday 13 May 2025 23:47:16 +0000 (0:00:01.191) 0:00:07.850 *********** 2025-05-13 23:48:47.936857 | orchestrator | changed: [testbed-manager] 2025-05-13 23:48:47.936868 | orchestrator | 2025-05-13 23:48:47.936878 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-13 23:48:47.936889 | orchestrator | Tuesday 13 May 2025 23:47:17 +0000 (0:00:00.995) 0:00:08.845 *********** 2025-05-13 23:48:47.936899 | orchestrator | changed: [testbed-manager] 2025-05-13 23:48:47.936910 | orchestrator | 2025-05-13 23:48:47.936921 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-13 23:48:47.936931 | orchestrator | Tuesday 13 May 2025 23:47:20 +0000 (0:00:02.361) 0:00:11.207 *********** 2025-05-13 23:48:47.936942 | orchestrator | changed: [testbed-manager] 2025-05-13 23:48:47.936953 | orchestrator | 2025-05-13 23:48:47.936963 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-13 23:48:47.936974 | orchestrator | Tuesday 13 May 2025 23:47:21 +0000 (0:00:01.187) 0:00:12.394 *********** 2025-05-13 23:48:47.936984 | orchestrator | changed: [testbed-manager] 2025-05-13 23:48:47.936995 | orchestrator | 2025-05-13 23:48:47.937006 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-13 23:48:47.937016 | orchestrator | Tuesday 13 May 2025 23:48:09 +0000 (0:00:48.187) 0:01:00.582 *********** 2025-05-13 23:48:47.937040 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:48:47.937051 | orchestrator | 2025-05-13 23:48:47.937062 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-13 23:48:47.937073 | orchestrator | 2025-05-13 23:48:47.937084 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-13 23:48:47.937095 | orchestrator | Tuesday 13 May 2025 23:48:09 +0000 (0:00:00.181) 0:01:00.763 *********** 2025-05-13 23:48:47.937105 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:48:47.937116 | orchestrator | 2025-05-13 23:48:47.937131 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-13 23:48:47.937142 | orchestrator | 2025-05-13 23:48:47.937153 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-13 23:48:47.937164 | orchestrator | Tuesday 13 May 2025 23:48:21 +0000 (0:00:11.846) 0:01:12.610 *********** 2025-05-13 23:48:47.937174 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:48:47.937185 | orchestrator | 2025-05-13 23:48:47.937195 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-13 23:48:47.937206 | orchestrator | 2025-05-13 23:48:47.937217 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-13 23:48:47.937227 | orchestrator | Tuesday 13 May 2025 23:48:32 +0000 (0:00:11.064) 0:01:23.674 
*********** 2025-05-13 23:48:47.937238 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:48:47.937248 | orchestrator | 2025-05-13 23:48:47.937259 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:48:47.937270 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 23:48:47.937281 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:48:47.937292 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:48:47.937303 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:48:47.937314 | orchestrator | 2025-05-13 23:48:47.937331 | orchestrator | 2025-05-13 23:48:47.937341 | orchestrator | 2025-05-13 23:48:47.937352 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:48:47.937362 | orchestrator | Tuesday 13 May 2025 23:48:33 +0000 (0:00:00.975) 0:01:24.650 *********** 2025-05-13 23:48:47.937373 | orchestrator | =============================================================================== 2025-05-13 23:48:47.937383 | orchestrator | Create admin user ------------------------------------------------------ 48.19s 2025-05-13 23:48:47.937394 | orchestrator | Restart ceph manager service ------------------------------------------- 23.89s 2025-05-13 23:48:47.937404 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.36s 2025-05-13 23:48:47.937415 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.31s 2025-05-13 23:48:47.937425 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.36s 2025-05-13 23:48:47.937436 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.36s 2025-05-13 23:48:47.937446 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.32s 2025-05-13 23:48:47.937456 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.19s 2025-05-13 23:48:47.937467 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2025-05-13 23:48:47.937477 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.00s 2025-05-13 23:48:47.937488 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2025-05-13 23:48:47.937498 | orchestrator | 2025-05-13 23:48:47.937538 | orchestrator | 2025-05-13 23:48:47.937558 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:48:47.937577 | orchestrator | 2025-05-13 23:48:47.937595 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:48:47.937606 | orchestrator | Tuesday 13 May 2025 23:45:55 +0000 (0:00:00.294) 0:00:00.294 *********** 2025-05-13 23:48:47.937617 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:48:47.937628 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:48:47.937638 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:48:47.937649 | orchestrator | 2025-05-13 23:48:47.937659 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:48:47.937670 | orchestrator | Tuesday 
13 May 2025 23:45:55 +0000 (0:00:00.307) 0:00:00.602 *********** 2025-05-13 23:48:47.937680 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-13 23:48:47.937691 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-13 23:48:47.937701 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-13 23:48:47.937712 | orchestrator | 2025-05-13 23:48:47.937723 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-13 23:48:47.937733 | orchestrator | 2025-05-13 23:48:47.937744 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-13 23:48:47.937754 | orchestrator | Tuesday 13 May 2025 23:45:56 +0000 (0:00:00.420) 0:00:01.023 *********** 2025-05-13 23:48:47.937765 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:48:47.937775 | orchestrator | 2025-05-13 23:48:47.937786 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-13 23:48:47.937796 | orchestrator | Tuesday 13 May 2025 23:45:56 +0000 (0:00:00.507) 0:00:01.530 *********** 2025-05-13 23:48:47.937828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.937853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.937867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.937880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.937892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.937914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.937934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.937946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.937958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.937970 | orchestrator | 2025-05-13 23:48:47.937980 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-13 23:48:47.937991 | orchestrator | Tuesday 13 May 2025 23:45:58 +0000 (0:00:01.934) 0:00:03.464 *********** 2025-05-13 23:48:47.938002 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-13 23:48:47.938013 | orchestrator | 2025-05-13 23:48:47.938079 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-13 23:48:47.938090 | orchestrator | Tuesday 13 May 2025 23:45:59 +0000 (0:00:00.854) 0:00:04.319 *********** 2025-05-13 23:48:47.938101 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:48:47.938112 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:48:47.938123 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:48:47.938133 | orchestrator | 2025-05-13 23:48:47.938144 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-13 23:48:47.938154 | orchestrator | Tuesday 13 May 2025 23:45:59 +0000 (0:00:00.511) 0:00:04.830 *********** 2025-05-13 23:48:47.938165 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:48:47.938176 | orchestrator | 2025-05-13 23:48:47.938187 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-13 23:48:47.938197 | orchestrator | Tuesday 13 May 2025 23:46:00 +0000 (0:00:00.710) 0:00:05.541 *********** 2025-05-13 23:48:47.938208 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:48:47.938219 | orchestrator | 2025-05-13 23:48:47.938229 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-13 23:48:47.938240 | orchestrator | Tuesday 13 May 2025 23:46:01 +0000 (0:00:00.620) 0:00:06.162 *********** 2025-05-13 23:48:47.938271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.938285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.938299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.938311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.938323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.938348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.938364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.938376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.938387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.938399 | orchestrator | 2025-05-13 23:48:47.938410 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-13 23:48:47.938421 | orchestrator | Tuesday 13 May 2025 23:46:04 +0000 (0:00:03.484) 0:00:09.647 *********** 2025-05-13 23:48:47.938433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 23:48:47.938451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.938478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:48:47.938491 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:48:47.938553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 23:48:47.938570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.938582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:48:47.938593 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:48:47.938605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 23:48:47.938719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.938737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:48:47.938749 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:48:47.938760 | orchestrator | 
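As an aside, the healthcheck dicts in the items above map onto Docker's native healthcheck options. A minimal sketch of the equivalent docker run flags, using the image and test command taken verbatim from the log; the flag mapping itself is an assumption about how kolla-ansible hands these values to the container engine (the deployment actually drives containers through Ansible modules, not docker run):

```shell
# Sketch only: the healthcheck dict {'interval': '30', 'retries': '3',
# 'start_period': '5', 'test': ['CMD-SHELL', ...], 'timeout': '30'}
# expressed as plain Docker healthcheck flags.
docker run -d --name keystone \
  --health-cmd 'healthcheck_curl http://192.168.16.10:5000' \
  --health-interval 30s \
  --health-retries 3 \
  --health-start-period 5s \
  --health-timeout 30s \
  registry.osism.tech/kolla/keystone:2024.2
```

The `healthcheck_listen sshd 8023` and `/usr/bin/fernet-healthcheck.sh` tests seen for the keystone_ssh and keystone_fernet containers slot into --health-cmd the same way.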
2025-05-13 23:48:47.938771 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-13 23:48:47.938782 | orchestrator | Tuesday 13 May 2025 23:46:05 +0000 (0:00:00.704) 0:00:10.351 *********** 2025-05-13 23:48:47.938793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 23:48:47.938806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.938818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:48:47.938835 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:48:47.938855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 23:48:47.938872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.938884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:48:47.938896 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:48:47.938907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 23:48:47.938926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.938937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 23:48:47.938949 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:48:47.938959 | orchestrator | 2025-05-13 23:48:47.938970 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-13 23:48:47.938981 | orchestrator | Tuesday 13 May 2025 23:46:06 +0000 (0:00:00.857) 0:00:11.209 *********** 2025-05-13 23:48:47.939004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.939017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.939030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.939048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939111 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939128 | orchestrator | 2025-05-13 23:48:47.939138 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-13 23:48:47.939148 | orchestrator | Tuesday 13 May 2025 23:46:10 +0000 (0:00:03.947) 0:00:15.156 *********** 2025-05-13 23:48:47.939158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.939174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.939189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.939200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.939210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.939228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.939239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939280 | orchestrator | 2025-05-13 23:48:47.939290 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-13 23:48:47.939300 | orchestrator | Tuesday 13 May 2025 23:46:15 +0000 (0:00:05.471) 0:00:20.628 *********** 2025-05-13 23:48:47.939310 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:48:47.939320 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:48:47.939329 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:48:47.939339 | orchestrator | 2025-05-13 23:48:47.939348 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-13 23:48:47.939357 | orchestrator | Tuesday 13 May 2025 23:46:17 +0000 (0:00:01.435) 0:00:22.064 *********** 2025-05-13 23:48:47.939373 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:48:47.939405 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:48:47.939415 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:48:47.939424 | orchestrator | 2025-05-13 23:48:47.939434 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-13 23:48:47.939444 | orchestrator | Tuesday 13 May 2025 23:46:17 +0000 (0:00:00.589) 0:00:22.653 *********** 2025-05-13 23:48:47.939453 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:48:47.939463 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:48:47.939472 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:48:47.939482 | orchestrator | 2025-05-13 23:48:47.939491 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-13 23:48:47.939516 | orchestrator | Tuesday 13 May 2025 23:46:18 +0000 (0:00:00.505) 0:00:23.158 *********** 2025-05-13 23:48:47.939528 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:48:47.939538 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:48:47.939548 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:48:47.939557 | orchestrator | 2025-05-13 23:48:47.939567 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-13 23:48:47.939576 | orchestrator | Tuesday 13 May 2025 23:46:18 +0000 (0:00:00.252) 0:00:23.410 *********** 2025-05-13 23:48:47.939586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.939609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.939621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.939641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.939652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.939662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 23:48:47.939673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.939721 | orchestrator | 2025-05-13 23:48:47.939731 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-13 23:48:47.939741 | orchestrator | Tuesday 13 May 2025 23:46:20 +0000 (0:00:02.296) 0:00:25.707 *********** 2025-05-13 23:48:47.939750 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:48:47.939760 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:48:47.939769 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:48:47.939778 | orchestrator | 2025-05-13 23:48:47.939788 | orchestrator | TASK [keystone : Copying 
over wsgi-keystone.conf] ****************************** 2025-05-13 23:48:47.939797 | orchestrator | Tuesday 13 May 2025 23:46:21 +0000 (0:00:00.352) 0:00:26.060 *********** 2025-05-13 23:48:47.939807 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-13 23:48:47.939816 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-13 23:48:47.939825 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-13 23:48:47.939835 | orchestrator | 2025-05-13 23:48:47.939844 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-13 23:48:47.939853 | orchestrator | Tuesday 13 May 2025 23:46:22 +0000 (0:00:01.735) 0:00:27.796 *********** 2025-05-13 23:48:47.939863 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:48:47.939872 | orchestrator | 2025-05-13 23:48:47.939882 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-13 23:48:47.939891 | orchestrator | Tuesday 13 May 2025 23:46:23 +0000 (0:00:00.915) 0:00:28.711 *********** 2025-05-13 23:48:47.939901 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:48:47.939910 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:48:47.939919 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:48:47.939929 | orchestrator | 2025-05-13 23:48:47.939938 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-13 23:48:47.939947 | orchestrator | Tuesday 13 May 2025 23:46:24 +0000 (0:00:00.459) 0:00:29.171 *********** 2025-05-13 23:48:47.939957 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-13 23:48:47.939966 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-13 23:48:47.939976 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:48:47.939985 | orchestrator | 2025-05-13 23:48:47.939994 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-13 23:48:47.940004 | orchestrator | Tuesday 13 May 2025 23:46:25 +0000 (0:00:01.203) 0:00:30.375 *********** 2025-05-13 23:48:47.940013 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:48:47.940022 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:48:47.940032 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:48:47.940041 | orchestrator | 2025-05-13 23:48:47.940051 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-13 23:48:47.940060 | orchestrator | Tuesday 13 May 2025 23:46:25 +0000 (0:00:00.290) 0:00:30.666 *********** 2025-05-13 23:48:47.940069 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-13 23:48:47.940079 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-13 23:48:47.940088 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-13 23:48:47.940097 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-13 23:48:47.940107 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-13 23:48:47.940116 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-13 23:48:47.940125 | 
orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-13 23:48:47.940143 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-13 23:48:47.940153 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-13 23:48:47.940162 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-13 23:48:47.940171 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-13 23:48:47.940186 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-13 23:48:47.940196 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-13 23:48:47.940205 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-13 23:48:47.940219 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-13 23:48:47.940228 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 23:48:47.940238 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 23:48:47.940247 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 23:48:47.940257 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 23:48:47.940266 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 23:48:47.940275 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 23:48:47.940284 | orchestrator | 2025-05-13 23:48:47.940294 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-13 23:48:47.940303 | orchestrator | Tuesday 13 May 2025 23:46:34 +0000 (0:00:08.878) 0:00:39.544 *********** 2025-05-13 23:48:47.940313 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 23:48:47.940322 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 23:48:47.940332 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 23:48:47.940341 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 23:48:47.940350 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 23:48:47.940359 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 23:48:47.940368 | orchestrator | 2025-05-13 23:48:47.940378 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-13 23:48:47.940387 | orchestrator | Tuesday 13 May 2025 23:46:37 +0000 (0:00:02.632) 0:00:42.177 *********** 2025-05-13 23:48:47.940397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.940414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.940437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 23:48:47.940448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}}) 2025-05-13 23:48:47.940458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.940469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 23:48:47.940479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.940495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.940528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 23:48:47.940539 | orchestrator | 2025-05-13 23:48:47.940553 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-13 23:48:47.940563 | orchestrator | Tuesday 13 May 2025 23:46:39 +0000 (0:00:02.251) 0:00:44.428 *********** 2025-05-13 23:48:47.940573 | orchestrator | skipping: 
[testbed-node-0]
2025-05-13 23:48:47.940583 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:48:47.940592 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:48:47.940601 | orchestrator |
2025-05-13 23:48:47.940611 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-05-13 23:48:47.940621 | orchestrator | Tuesday 13 May 2025 23:46:39 +0000 (0:00:00.287) 0:00:44.716 ***********
2025-05-13 23:48:47.940630 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:48:47.940639 | orchestrator |
2025-05-13 23:48:47.940648 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-05-13 23:48:47.940658 | orchestrator | Tuesday 13 May 2025 23:46:41 +0000 (0:00:02.103) 0:00:46.819 ***********
2025-05-13 23:48:47.940667 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:48:47.940676 | orchestrator |
2025-05-13 23:48:47.940686 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-05-13 23:48:47.940695 | orchestrator | Tuesday 13 May 2025 23:46:44 +0000 (0:00:02.518) 0:00:49.337 ***********
2025-05-13 23:48:47.940705 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:48:47.940714 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:48:47.940723 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:48:47.940733 | orchestrator |
2025-05-13 23:48:47.940742 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-05-13 23:48:47.940751 | orchestrator | Tuesday 13 May 2025 23:46:45 +0000 (0:00:00.884) 0:00:50.222 ***********
2025-05-13 23:48:47.940761 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:48:47.940770 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:48:47.940779 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:48:47.940788 | orchestrator |
2025-05-13 23:48:47.940798 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-05-13 23:48:47.940807 | orchestrator | Tuesday 13 May 2025 23:46:45 +0000 (0:00:00.350) 0:00:50.612 ***********
2025-05-13 23:48:47.940817 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:48:47.940833 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:48:47.940842 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:48:47.940852 | orchestrator |
2025-05-13 23:48:47.940861 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-05-13 23:48:47.940871 | orchestrator | Tuesday 13 May 2025 23:46:45 +0000 (0:00:00.350) 0:00:50.963 ***********
2025-05-13 23:48:47.940880 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:48:47.940890 | orchestrator |
2025-05-13 23:48:47.940899 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-05-13 23:48:47.940908 | orchestrator | Tuesday 13 May 2025 23:46:59 +0000 (0:00:13.654) 0:01:04.618 ***********
2025-05-13 23:48:47.940917 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:48:47.940927 | orchestrator |
2025-05-13 23:48:47.940936 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-13 23:48:47.940945 | orchestrator | Tuesday 13 May 2025 23:47:08 +0000 (0:00:09.029) 0:01:13.647 ***********
2025-05-13 23:48:47.940954 | orchestrator |
2025-05-13 23:48:47.940964 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-13 23:48:47.940973 | orchestrator | Tuesday 13 May 2025 23:47:08 +0000 (0:00:00.259) 0:01:13.907 ***********
2025-05-13 23:48:47.940982 | orchestrator |
2025-05-13 23:48:47.940992 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-13 23:48:47.941001 | orchestrator | Tuesday 13 May 2025 23:47:08 +0000 (0:00:00.065) 0:01:13.972 ***********
2025-05-13 23:48:47.941010 | orchestrator |
2025-05-13 23:48:47.941020 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-05-13 23:48:47.941029 | orchestrator | Tuesday 13 May 2025 23:47:09 +0000 (0:00:00.067) 0:01:14.039 ***********
2025-05-13 23:48:47.941038 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:48:47.941048 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:48:47.941057 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:48:47.941066 | orchestrator |
2025-05-13 23:48:47.941075 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-05-13 23:48:47.941085 | orchestrator | Tuesday 13 May 2025 23:47:52 +0000 (0:00:43.668) 0:01:57.708 ***********
2025-05-13 23:48:47.941094 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:48:47.941103 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:48:47.941112 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:48:47.941122 | orchestrator |
2025-05-13 23:48:47.941131 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-05-13 23:48:47.941140 | orchestrator | Tuesday 13 May 2025 23:47:57 +0000 (0:00:04.748) 0:02:02.457 ***********
2025-05-13 23:48:47.941150 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:48:47.941159 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:48:47.941169 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:48:47.941178 | orchestrator |
2025-05-13 23:48:47.941188 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-13 23:48:47.941197 | orchestrator | Tuesday 13 May 2025 23:48:08 +0000 (0:00:11.437) 0:02:13.895 ***********
2025-05-13 23:48:47.941206 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:48:47.941216 | orchestrator |
2025-05-13 23:48:47.941225 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-05-13 23:48:47.941234 | orchestrator | Tuesday 13 May 2025 23:48:09 +0000 (0:00:00.761) 0:02:14.656 ***********
2025-05-13 23:48:47.941244 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:48:47.941253 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:48:47.941262 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:48:47.941272 | orchestrator |
2025-05-13 23:48:47.941287 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-05-13 23:48:47.941297 | orchestrator | Tuesday 13 May 2025 23:48:10 +0000 (0:00:00.752) 0:02:15.409 ***********
2025-05-13 23:48:47.941306 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:48:47.941315 | orchestrator |
2025-05-13 23:48:47.941325 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-05-13 23:48:47.941339 | orchestrator | Tuesday 13 May 2025 23:48:12 +0000 (0:00:01.741) 0:02:17.151 ***********
2025-05-13 23:48:47.941353 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-05-13 23:48:47.941362 | orchestrator |
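The fernet choreography above (the fernet bootstrap container on testbed-node-0, keystone-ssh listening on port 8023, the fernet-rotate.sh/fernet-push.sh scripts and the crontab copied earlier, then "Run key distribution") follows the standard keystone-manage fernet workflow. A minimal sketch, assuming the stock keystone-manage subcommands and an rsync-over-ssh push like kolla's fernet-push.sh; the port is from the keystone-ssh healthcheck in the log, while the user and target host are illustrative:

```shell
# Create the fernet key repository once (here: the fernet bootstrap container).
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

# Rotate keys periodically (here: driven by the crontab rendered above).
keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone

# Push the repository to the other keystone hosts through the keystone-ssh
# container (hypothetical user/host; port 8023 from the log).
rsync -az -e 'ssh -p 8023' /etc/keystone/fernet-keys/ \
  keystone@testbed-node-1:/etc/keystone/fernet-keys/
```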
2025-05-13 23:48:47.941372 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-05-13 23:48:47.941381 | orchestrator | Tuesday 13 May 2025 23:48:22 +0000 (0:00:10.108) 0:02:27.259 ***********
2025-05-13 23:48:47.941391 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-05-13 23:48:47.941400 | orchestrator |
2025-05-13 23:48:47.941409 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-05-13 23:48:47.941419 | orchestrator | Tuesday 13 May 2025 23:48:35 +0000 (0:00:13.593) 0:02:40.853 ***********
2025-05-13 23:48:47.941428 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-05-13 23:48:47.941437 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-05-13 23:48:47.941446 | orchestrator |
2025-05-13 23:48:47.941456 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-05-13 23:48:47.941465 | orchestrator | Tuesday 13 May 2025 23:48:41 +0000 (0:00:05.635) 0:02:46.488 ***********
2025-05-13 23:48:47.941474 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:48:47.941484 | orchestrator |
2025-05-13 23:48:47.941493 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-05-13 23:48:47.941553 | orchestrator | Tuesday 13 May 2025 23:48:41 +0000 (0:00:00.121) 0:02:46.610 ***********
2025-05-13 23:48:47.941571 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:48:47.941581 | orchestrator |
2025-05-13 23:48:47.941591 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-05-13 23:48:47.941600 | orchestrator | Tuesday 13 May 2025 23:48:41 +0000 (0:00:00.107) 0:02:46.717 ***********
2025-05-13 23:48:47.941610 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:48:47.941619 | orchestrator |
2025-05-13 23:48:47.941628 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-05-13 23:48:47.941638 | orchestrator | Tuesday 13 May 2025 23:48:41 +0000 (0:00:00.112) 0:02:46.830 ***********
2025-05-13 23:48:47.941647 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:48:47.941656 | orchestrator |
2025-05-13 23:48:47.941666 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-05-13 23:48:47.941675 | orchestrator | Tuesday 13 May 2025 23:48:42 +0000 (0:00:00.344) 0:02:47.174 ***********
2025-05-13 23:48:47.941684 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:48:47.941694 | orchestrator |
2025-05-13 23:48:47.941703 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-13 23:48:47.941712 | orchestrator | Tuesday 13 May 2025 23:48:44 +0000 (0:00:02.834) 0:02:50.009 ***********
2025-05-13 23:48:47.941722 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:48:47.941731 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:48:47.941740 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:48:47.941750 | orchestrator |
2025-05-13 23:48:47.941759 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:48:47.941769 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-13 23:48:47.941778 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-13 23:48:47.941788 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-13 23:48:47.941797 | orchestrator |
2025-05-13 23:48:47.941807 | orchestrator |
2025-05-13 23:48:47.941816 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:48:47.941825 | orchestrator | Tuesday 13 May 2025 23:48:45 +0000 (0:00:00.679) 0:02:50.689 ***********
2025-05-13 23:48:47.941842 | orchestrator | ===============================================================================
2025-05-13 23:48:47.941852 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 43.67s
2025-05-13 23:48:47.941861 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.66s
2025-05-13 23:48:47.941870 | orchestrator | service-ks-register : keystone | Creating services --------------------- 13.59s
2025-05-13 23:48:47.941880 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.44s
2025-05-13 23:48:47.941889 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.11s
2025-05-13 23:48:47.941898 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.03s
2025-05-13 23:48:47.941907 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.88s
2025-05-13 23:48:47.941916 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.64s
2025-05-13 23:48:47.941926 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.47s
2025-05-13 23:48:47.941935 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.75s
2025-05-13 23:48:47.941944 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.95s
2025-05-13 23:48:47.941953 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.48s
2025-05-13 23:48:47.941969 | orchestrator | keystone : Creating default user role ----------------------------------- 2.83s
2025-05-13 23:48:47.941979 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.63s
2025-05-13 23:48:47.941989 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.52s
2025-05-13 23:48:47.941998 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.30s
2025-05-13 23:48:47.942007 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.25s
2025-05-13 23:48:47.942045 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.10s
2025-05-13 23:48:47.942057 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.93s
2025-05-13 23:48:47.942067 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.74s
2025-05-13 23:48:47.942076 | orchestrator | 2025-05-13 23:48:47 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED
2025-05-13 23:48:47.942361 | orchestrator | 2025-05-13 23:48:47 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED
2025-05-13 23:48:47.942436 | orchestrator | 2025-05-13 23:48:47 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED
2025-05-13 
23:48:47.943577 | orchestrator | 2025-05-13 23:48:47 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state STARTED
2025-05-13 23:48:47.943613 | orchestrator | 2025-05-13 23:48:47 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:49:03.224816 | orchestrator | 2025-05-13 23:49:03 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:49:03.225864 | orchestrator | 2025-05-13 23:49:03 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state STARTED
2025-05-13 23:49:03.226688 | orchestrator | 2025-05-13 23:49:03 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED
2025-05-13 23:49:03.227894 | orchestrator | 2025-05-13 23:49:03 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED
2025-05-13 23:49:03.230274 | orchestrator | 2025-05-13 23:49:03 | INFO  | Task 0a2c0384-b5e6-40f8-80bb-cff2ea196bc8 is in state SUCCESS
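These INFO lines are the deployment tooling polling its background tasks: each play runs detached, and the console checks the task state once per second until it leaves STARTED, as task 0a2c0384-... just did by reaching SUCCESS. Several plays run concurrently here, which is why multiple task IDs cycle through the same check. The same wait-until-done pattern in plain Ansible, with the status URL and response field being purely illustrative:

  - name: Poll a task status endpoint until the task reports SUCCESS (illustrative API)
    ansible.builtin.uri:
      url: "https://manager.testbed.osism.xyz/api/tasks/{{ task_id }}"  # hypothetical endpoint
      return_content: true
    register: task_status
    until: task_status.json.state == "SUCCESS"
    retries: 600  # give up after roughly ten minutes
    delay: 1      # matches the "Wait 1 second(s)" cadence in the log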
2025-05-13 23:49:03.230359 | orchestrator | 2025-05-13 23:49:03 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:50:19.310873 | orchestrator | 2025-05-13 23:50:19 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:50:19.316372 | orchestrator |
2025-05-13 23:50:19.316449 | orchestrator |
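The plays that follow all begin with the same two grouping tasks before a role is applied: hosts are sorted into dynamic groups from the configured Kolla action and from each enable_* flag, so the ceph-rgw role below only ever targets hosts that landed in the enable_ceph_rgw_True group. A reduced sketch of that pattern:

  - name: Group hosts based on configuration
    hosts: all
    gather_facts: false
    tasks:
      - name: Group hosts based on Kolla action
        ansible.builtin.group_by:
          key: "kolla_action_{{ kolla_action }}"  # e.g. kolla_action_deploy

      - name: Group hosts based on enabled services
        ansible.builtin.group_by:
          key: "{{ item }}"
        loop:
          - "enable_ceph_rgw_{{ enable_ceph_rgw | bool }}"  # yields enable_ceph_rgw_True here

  - name: Apply role ceph-rgw
    hosts: enable_ceph_rgw_True  # only the hosts grouped above
    roles:
      - ceph-rgw

Hosts without the flag simply never join the group, so the role play becomes a no-op for them.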
2025-05-13 23:50:19.316464 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:50:19.316550 | orchestrator |
2025-05-13 23:50:19.316612 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:50:19.316653 | orchestrator | Tuesday 13 May 2025 23:48:23 +0000 (0:00:00.267) 0:00:00.267 ***********
2025-05-13 23:50:19.316665 | orchestrator | ok: [testbed-manager]
2025-05-13 23:50:19.316678 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:50:19.316689 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:50:19.316699 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:50:19.316799 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:50:19.316811 | orchestrator | ok: [testbed-node-4]
2025-05-13 23:50:19.316821 | orchestrator | ok: [testbed-node-5]
2025-05-13 23:50:19.316832 | orchestrator |
2025-05-13 23:50:19.316844 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 23:50:19.316856 | orchestrator | Tuesday 13 May 2025 23:48:24 +0000 (0:00:00.895) 0:00:01.163 ***********
2025-05-13 23:50:19.316867 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-05-13 23:50:19.316878 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-05-13 23:50:19.316889 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-05-13 23:50:19.316900 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-05-13 23:50:19.316911 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-05-13 23:50:19.316923 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-05-13 23:50:19.316935 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-05-13 23:50:19.316947 | orchestrator |
2025-05-13 23:50:19.316959 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-13 23:50:19.316971 | orchestrator |
2025-05-13 23:50:19.316983 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-05-13 23:50:19.316995 | orchestrator | Tuesday 13 May 2025 23:48:25 +0000 (0:00:00.928) 0:00:02.092 ***********
2025-05-13 23:50:19.317008 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:50:19.317021 | orchestrator |
2025-05-13 23:50:19.317034 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-05-13 23:50:19.317046 | orchestrator | Tuesday 13 May 2025 23:48:26 +0000 (0:00:01.259) 0:00:03.351 ***********
2025-05-13 23:50:19.317058 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2025-05-13 23:50:19.317070 | orchestrator |
2025-05-13 23:50:19.317083 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-05-13 23:50:19.317095 | orchestrator | Tuesday 13 May 2025 23:48:36 +0000 (0:00:09.931) 0:00:13.283 ***********
2025-05-13 23:50:19.317108 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-05-13 23:50:19.317122 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-05-13 23:50:19.317135 | orchestrator |
2025-05-13 23:50:19.317147 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-05-13 23:50:19.317160 | orchestrator | Tuesday 13 May 2025 23:48:42 +0000 (0:00:06.110) 0:00:19.394 ***********
2025-05-13 23:50:19.317172 | orchestrator | changed: [testbed-manager] => (item=service)
2025-05-13 23:50:19.317185 | orchestrator |
2025-05-13 23:50:19.317197 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-05-13 23:50:19.317209 | orchestrator | Tuesday 13 May 2025 23:48:45 +0000 (0:00:03.022) 0:00:22.416 ***********
2025-05-13 23:50:19.317221 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-13 23:50:19.317234 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2025-05-13 23:50:19.317246 | orchestrator |
2025-05-13 23:50:19.317372 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-05-13 23:50:19.317401 | orchestrator | Tuesday 13 May 2025 23:48:49 +0000 (0:00:03.673) 0:00:26.090 ***********
2025-05-13 23:50:19.317413 | orchestrator | ok: [testbed-manager] => (item=admin)
2025-05-13 23:50:19.317485 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2025-05-13 23:50:19.317498 | orchestrator |
2025-05-13 23:50:19.317509 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-05-13 23:50:19.317520 | orchestrator | Tuesday 13 May 2025 23:48:55 +0000 (0:00:06.612) 0:00:32.703 ***********
2025-05-13 23:50:19.317601 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
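Both the keystone play earlier and the ceph-rgw play above go through the shared service-ks-register role, which boils down to: create the service, its internal and public endpoints, a service project and user, then grant the needed roles. The object-store URLs keep the literal AUTH_%(project_id)s placeholder, which radosgw expands per project at request time. The same sequence as plain openstack CLI calls (a sketch of the effect, not the role's actual implementation, and not idempotent the way the role is):

  - name: Create the object-store service entry (sketch)
    ansible.builtin.command: >
      openstack service create --name swift
      --description "Ceph RadosGW" object-store

  - name: Create the internal and public endpoints (sketch)
    ansible.builtin.command: >
      openstack endpoint create --region RegionOne swift {{ item.interface }}
      'https://{{ item.fqdn }}:6780/swift/v1/AUTH_%(project_id)s'
    loop:
      - { interface: internal, fqdn: api-int.testbed.osism.xyz }
      - { interface: public, fqdn: api.testbed.osism.xyz }

  - name: Grant the ceph_rgw service user its role (sketch)
    ansible.builtin.command: >
      openstack role add --project service --user ceph_rgw admin

ResellerAdmin is created alongside admin (conventionally the Swift role for acting on other projects' accounts), though only admin is granted to ceph_rgw here. The real role is idempotent, which is why a re-run reports ok: instead of changed:, as seen for the keystone endpoints earlier; the [WARNING] about no_log during user creation is an argument-spec warning from the module, not a failure.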
2025-05-13 23:50:19.317613 | orchestrator |
2025-05-13 23:50:19.317623 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:50:19.317634 | orchestrator | testbed-manager : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:50:19.317645 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:50:19.317656 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:50:19.317667 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:50:19.317677 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:50:19.317717 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:50:19.317730 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 23:50:19.317741 | orchestrator |
2025-05-13 23:50:19.317752 | orchestrator |
2025-05-13 23:50:19.317763 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:50:19.317774 | orchestrator | Tuesday 13 May 2025 23:49:01 +0000 (0:00:05.327) 0:00:38.030 ***********
2025-05-13 23:50:19.317785 | orchestrator | ===============================================================================
2025-05-13 23:50:19.317796 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 9.93s
2025-05-13 23:50:19.317807 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.61s
2025-05-13 23:50:19.317817 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.11s
2025-05-13 23:50:19.317828 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.33s
2025-05-13 23:50:19.317839 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.67s
2025-05-13 23:50:19.317850 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.02s
2025-05-13 23:50:19.317860 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.26s
2025-05-13 23:50:19.317871 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s
2025-05-13 23:50:19.317882 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s
2025-05-13 23:50:19.317892 | orchestrator |
2025-05-13 23:50:19.317903 | orchestrator |
2025-05-13 23:50:19.317914 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:50:19.317925 | orchestrator |
2025-05-13 23:50:19.317936 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:50:19.317947 | orchestrator | Tuesday 13 May 2025 23:47:09 +0000 (0:00:00.282) 0:00:00.282 ***********
2025-05-13 23:50:19.317957 | orchestrator | ok: [testbed-manager]
2025-05-13 23:50:19.317969 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:50:19.317980 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:50:19.317991 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:50:19.318002 | orchestrator | ok: [testbed-node-3]
2025-05-13 23:50:19.318013 | orchestrator | ok: [testbed-node-4]
2025-05-13
23:50:19.318057 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:50:19.318068 | orchestrator | 2025-05-13 23:50:19.318091 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:50:19.318113 | orchestrator | Tuesday 13 May 2025 23:47:10 +0000 (0:00:01.442) 0:00:01.725 *********** 2025-05-13 23:50:19.318200 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-13 23:50:19.318213 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-13 23:50:19.318223 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-13 23:50:19.318234 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-13 23:50:19.318276 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-13 23:50:19.318287 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-13 23:50:19.318298 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-13 23:50:19.318329 | orchestrator | 2025-05-13 23:50:19.318341 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-13 23:50:19.318352 | orchestrator | 2025-05-13 23:50:19.318362 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-13 23:50:19.318373 | orchestrator | Tuesday 13 May 2025 23:47:11 +0000 (0:00:00.947) 0:00:02.672 *********** 2025-05-13 23:50:19.318384 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:50:19.318396 | orchestrator | 2025-05-13 23:50:19.318406 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-13 23:50:19.318424 | orchestrator | Tuesday 13 May 2025 23:47:13 +0000 (0:00:01.608) 0:00:04.281 *********** 2025-05-13 23:50:19.318438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.318454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.318477 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-13 23:50:19.318491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.318504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.318523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.318535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.318552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.318564 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.318576 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.318595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.318608 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.318627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.318638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.318650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.318662 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.318676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.318697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.318709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.318720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.318771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.318784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.318800 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-13 23:50:19.318815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.318842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.318855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.318874 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.318887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.318898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.318911 | orchestrator | 2025-05-13 23:50:19.318922 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-13 23:50:19.318932 | orchestrator | Tuesday 13 May 2025 23:47:16 +0000 (0:00:03.631) 0:00:07.913 *********** 2025-05-13 23:50:19.318943 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:50:19.318954 | orchestrator | 2025-05-13 23:50:19.318965 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-13 23:50:19.318975 | orchestrator | Tuesday 13 May 2025 23:47:18 +0000 (0:00:01.495) 0:00:09.409 *********** 2025-05-13 23:50:19.318992 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-13 23:50:19.319004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.319023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 
23:50:19.319042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.319054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.319065 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.319077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.319100 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.319112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.319123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.319141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.319161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.319172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.319223 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.319237 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.319255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.319267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.319279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.319339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': 2025-05-13 23:50:19 | INFO  | Task 9499ae37-5b86-48b7-92cb-0b1b49901608 is in state SUCCESS 2025-05-13 23:50:19.319354 | orchestrator | True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.319367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.319379 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.319391 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-13 23:50:19.319409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.319421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.319439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.319457 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.319469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.319480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.319491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.319502 | orchestrator | 2025-05-13 23:50:19.319513 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-13 23:50:19.319524 | orchestrator | Tuesday 13 May 2025 23:47:23 +0000 (0:00:05.247) 0:00:14.656 *********** 2025-05-13 23:50:19.319540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-13 23:50:19.319552 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.319570 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.319597 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-13 23:50:19.319610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.319622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.319634 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.319650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.319662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.319679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.319698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.319710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.319722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.319733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.319745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.319756 | orchestrator | skipping: [testbed-manager] 2025-05-13 23:50:19.319768 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:50:19.319779 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:50:19.319796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.319814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.319826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.319844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.319855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.319888 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:50:19.319900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.319912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 
23:50:19.319923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.319941 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:50:19.319958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.319970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.319987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.319999 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:50:19.320010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.320022 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320044 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:50:19.320055 | orchestrator | 2025-05-13 23:50:19.320066 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-13 23:50:19.320077 | orchestrator | Tuesday 13 May 2025 23:47:24 +0000 (0:00:01.375) 0:00:16.032 *********** 2025-05-13 23:50:19.320088 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-13 23:50:19.320114 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.320126 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320145 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-13 23:50:19.320157 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.320169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.320180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.320198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.320214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.320237 | orchestrator | skipping: [testbed-manager] 2025-05-13 
23:50:19.320247 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:50:19.320265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.320276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.320288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.320319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.320350 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:50:19.320367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.320378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.320390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.320408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 23:50:19.320430 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:50:19.320441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.320453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320482 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:50:19.320498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.320509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320533 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:50:19.320565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 23:50:19.320577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 23:50:19.320605 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:50:19.320616 | orchestrator | 2025-05-13 23:50:19.320626 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-13 23:50:19.320637 | orchestrator | Tuesday 13 May 2025 23:47:26 +0000 (0:00:01.747) 0:00:17.779 *********** 2025-05-13 23:50:19.320649 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-13 23:50:19.320665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.320677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.320688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.320712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 
23:50:19.320724 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.320742 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.320753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.320765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 23:50:19.320781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.320793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.320805 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.320822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.320834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.320851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.320863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.320874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.320890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.320902 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-13 23:50:19.320921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.320933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.320951 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.320962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.320974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.320989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 23:50:19.321001 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.321012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.321030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.321051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 23:50:19.321062 | orchestrator | 2025-05-13 23:50:19.321073 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-13 23:50:19.321084 | orchestrator | Tuesday 13 May 2025 23:47:32 +0000 (0:00:05.602) 0:00:23.382 *********** 2025-05-13 23:50:19.321095 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 23:50:19.321106 | orchestrator | 2025-05-13 23:50:19.321116 | orchestrator | TASK [prometheus : Copying over 
custom prometheus alert rules files] *********** 2025-05-13 23:50:19.321127 | orchestrator | Tuesday 13 May 2025 23:47:33 +0000 (0:00:01.291) 0:00:24.673 *********** 2025-05-13 23:50:19.321139 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088537, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.44434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321151 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088537, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.44434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321168 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088537, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.44434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321180 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088537, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.44434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321198 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088526, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321216 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088537, 
'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.44434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 23:50:19.321227 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088537, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.44434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321238 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088526, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321249 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088526, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321265 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088537, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.44434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321277 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088526, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321295 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088504, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4283397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321328 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088526, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321341 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088504, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4283397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321352 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088526, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321363 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088504, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4283397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321380 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088504, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4283397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321392 | orchestrator 
| changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088526, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 23:50:19.321416 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088504, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4283397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321427 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088506, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4293396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321439 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088506, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4293396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321450 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088506, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4293396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321462 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088504, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4283397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321478 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088506, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4293396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321490 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088524, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321514 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088506, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4293396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321525 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088524, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321537 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088524, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321548 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088524, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 
1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321560 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088506, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4293396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321575 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088513, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4313397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321587 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088504, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4283397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 23:50:19.321615 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088524, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321627 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088513, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4313397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321638 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 55956, 'inode': 1088513, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4313397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321650 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088513, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4313397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321661 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088513, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4313397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321677 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088522, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4353397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321689 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088524, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321707 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088522, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4353397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321745 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088522, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4353397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321757 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088522, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4353397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321768 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088506, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4293396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 23:50:19.321780 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088527, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321796 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088522, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4353397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321814 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088527, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321825 | 
orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088513, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4313397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321848 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088527, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321860 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088534, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4393399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321872 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088527, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321883 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088534, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4393399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321903 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088527, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321921 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088571, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4463398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321933 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088534, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4393399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321951 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088522, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4353397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321962 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088524, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 23:50:19.321974 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088532, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.321986 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088534, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 
1747129592.0, 'ctime': 1747176776.4393399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322002 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088571, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4463398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322081 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088534, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4393399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322097 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088571, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4463398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322116 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088510, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4303398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322127 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088527, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322138 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088571, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4463398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322150 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088571, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4463398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322168 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088534, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4393399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322185 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088518, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4343398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322196 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088532, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322214 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088513, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4313397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 23:50:19.322226 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088532, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322237 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088571, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4463398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322248 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088532, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322266 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088532, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322282 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088532, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4383397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322294 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088510, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4303398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322340 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088500, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4273396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322353 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088510, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4303398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322364 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088510, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4303398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322376 | orchestrator | 2025-05-13 23:50:19 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:50:19.322387 | orchestrator | 2025-05-13 23:50:19 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:50:19.322397 | orchestrator | 2025-05-13 23:50:19 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:50:19.322417 | orchestrator | 2025-05-13 23:50:19 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:50:19.322429 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088510, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4303398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322446 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088518, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4343398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False,
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322457 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088510, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4303398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322475 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088522, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4353397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 23:50:19.322487 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088525, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4363399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322499 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088500, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4273396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322510 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088518, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.4343398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 23:50:19.322532 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088518, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 
1747176776.4343398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
[loop output condensed: testbed-node-0 through testbed-node-5 skipped every remaining rule-file item (the items in this span cover fluentd-aggregator.rules, mysql.rules, rabbitmq.rules, alertmanager.rec.rules, elasticsearch.rules and prometheus.rules under /operations/prometheus/) and each node finished the task as skipping. testbed-manager reported changed for all eleven items, in this order: node.rules (13522 B), prometheus-extra.rules (7408 B), redfish.rules (334 B), openstack.rules (12293 B), ceph.rec.rules (3 B), fluentd-aggregator.rules (996 B), alertmanager.rec.rules (3 B), mysql.rules (3792 B), rabbitmq.rules (3539 B), elasticsearch.rules (5987 B) and prometheus.rules (12980 B). Every item dict carried the same stat metadata: regular file, mode 0644, uid/gid 0 (root:root), dev 174, nlink 1, atime/mtime 1747129592.0; only path, size, inode and ctime differed.]
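The task above distributes the custom Prometheus alert rule files from the configuration repository; only testbed-manager changes because the rule files only need to exist on the host that runs the prometheus_server container. The *.rules files are ordinary Prometheus rule groups; a minimal sketch of what such a file can contain (alert name, expression and threshold below are illustrative assumptions, not taken from this job):

groups:
  - name: node-extra
    rules:
      - alert: HostOutOfMemory                # hypothetical example rule
        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Instance {{ $labels.instance }} is low on available memory"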
2025-05-13 23:50:19.323178 | orchestrator |
2025-05-13 23:50:19.323189 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-13 23:50:19.323200 | orchestrator | Tuesday 13 May 2025 23:47:55 +0000 (0:00:21.719) 0:00:46.392 ***********
2025-05-13 23:50:19.323211 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-13 23:50:19.323223 | orchestrator |
2025-05-13 23:50:19.323234 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-13 23:50:19.323244 | orchestrator | Tuesday 13 May 2025 23:47:55 +0000 (0:00:00.677) 0:00:47.069 ***********
2025-05-13 23:50:19.323256 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-05-13 23:50:19.323376 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-05-13 23:50:19.323430 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-05-13 23:50:19.323490 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-05-13 23:50:19.323544 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-05-13 23:50:19.323604 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-05-13 23:50:19.323657 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-05-13 23:50:19.323716 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-13 23:50:19.323726 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-13 23:50:19.323735 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-13 23:50:19.323745 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-13 23:50:19.323754 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-13 23:50:19.323764 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-13 23:50:19.323773 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-13 23:50:19.323783 | orchestrator |
2025-05-13 23:50:19.323793 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-13 23:50:19.323802 | orchestrator | Tuesday 13 May 2025 23:47:57 +0000 (0:00:01.474) 0:00:48.543 ***********
2025-05-13 23:50:19.323812 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-13 23:50:19.323822 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-13 23:50:19.323832 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:50:19.323841 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:50:19.323851 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-13 23:50:19.323860 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:50:19.323870 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-13 23:50:19.323880 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:50:19.323889 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-13 23:50:19.323899 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:50:19.323909 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-13 23:50:19.323918 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:50:19.323927 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
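The two "Find ... config overrides" tasks implement the overlay mechanism for prometheus.yml: besides the common template, the role searches a shared prometheus.yml.d directory and per-host directories such as /opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d for YAML snippets to merge into the final configuration. The [WARNING] lines are harmless: they only record that this testbed configuration does not provide those per-host directories. A hypothetical override snippet, assuming one wanted to add an extra scrape job (file path, job name and target below are illustrative):

# e.g. .../overlays/prometheus/testbed-manager/prometheus.yml.d/99-extra.yml
scrape_configs:
  - job_name: "custom-exporter"            # hypothetical additional job
    static_configs:
      - targets: ["192.168.16.10:9100"]    # placeholder target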
2025-05-13 23:50:19.323937 | orchestrator |
2025-05-13 23:50:19.323947 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-13 23:50:19.323956 | orchestrator | Tuesday 13 May 2025 23:48:13 +0000 (0:00:15.740) 0:01:04.284 ***********
2025-05-13 23:50:19.323966 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-13 23:50:19.323976 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:50:19.323985 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-13 23:50:19.323995 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-13 23:50:19.324004 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:50:19.324014 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:50:19.324023 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-13 23:50:19.324037 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:50:19.324047 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-13 23:50:19.324056 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:50:19.324065 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-13 23:50:19.324074 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:50:19.324084 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-13 23:50:19.324093 | orchestrator |
2025-05-13 23:50:19.324103 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-13 23:50:19.324112 | orchestrator | Tuesday 13 May 2025 23:48:15 +0000 (0:00:02.689) 0:01:06.973 ***********
2025-05-13 23:50:19.324127 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-13 23:50:19.324137 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-13 23:50:19.324147 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:50:19.324156 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:50:19.324166 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-13 23:50:19.324175 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:50:19.324185 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-13 23:50:19.324195 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-13 23:50:19.324204 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:50:19.324214 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-13 23:50:19.324223 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:50:19.324233 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-13 23:50:19.324242 | orchestrator | skipping: [testbed-node-5]
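The alertmanager config task copies the prometheus-alertmanager.yml overlay from the configuration repository, again only to testbed-manager, where the prometheus_alertmanager container runs. The file follows the standard Alertmanager configuration schema; a minimal sketch of that shape (route and receiver below are illustrative, not the testbed's actual settings):

route:
  receiver: "default"
  group_by: ["alertname", "severity"]
receivers:
  - name: "default"
    # webhook_configs / email_configs for the real notification targets go here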
2025-05-13 23:50:19.324252 | orchestrator |
2025-05-13 23:50:19.324262 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-13 23:50:19.324277 | orchestrator | Tuesday 13 May 2025 23:48:17 +0000 (0:00:01.598) 0:01:08.571 ***********
2025-05-13 23:50:19.324286 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-13 23:50:19.324296 | orchestrator |
2025-05-13 23:50:19.324324 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-13 23:50:19.324334 | orchestrator | Tuesday 13 May 2025 23:48:18 +0000 (0:00:00.692) 0:01:09.264 ***********
2025-05-13 23:50:19.324343 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:50:19.324353 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:50:19.324362 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:50:19.324372 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:50:19.324381 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:50:19.324391 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:50:19.324400 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:50:19.324409 | orchestrator |
2025-05-13 23:50:19.324419 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-05-13 23:50:19.324428 | orchestrator | Tuesday 13 May 2025 23:48:18 +0000 (0:00:00.726) 0:01:09.990 ***********
2025-05-13 23:50:19.324438 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:50:19.324447 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:50:19.324457 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:50:19.324466 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:50:19.324482 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:50:19.324491 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:50:19.324500 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:50:19.324510 | orchestrator |
2025-05-13 23:50:19.324519 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-05-13 23:50:19.324529 | orchestrator | Tuesday 13 May 2025 23:48:21 +0000 (0:00:02.692) 0:01:12.683 ***********
2025-05-13 23:50:19.324539 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-13 23:50:19.324548 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-13 23:50:19.324558 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:50:19.324567 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-13 23:50:19.324577 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-13 23:50:19.324587 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-13 23:50:19.324596 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:50:19.324606 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:50:19.324615 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:50:19.324625 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:50:19.324634 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-13 23:50:19.324644 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:50:19.324653 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-13 23:50:19.324663 | orchestrator | skipping: [testbed-node-5]
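Two things stand out in the last two tasks: my.cnf for mysqld_exporter is rendered only on testbed-node-0/1/2, the control nodes where the mysqld exporter containers are later checked and restarted, and the clouds.yml for the openstack exporter is skipped on every host, presumably because that exporter is not enabled in this run. clouds.yml itself follows the usual openstacksdk layout; a generic sketch with placeholder values (none of these values come from this job):

clouds:
  default:
    auth:
      auth_url: "https://keystone.example.com:5000/v3"   # placeholder endpoint
      username: "prometheus"                             # placeholder credentials
      password: "REDACTED"
      project_name: "service"
      user_domain_name: "Default"
      project_domain_name: "Default"
    region_name: "RegionOne"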
2025-05-13 23:50:19.324672 | orchestrator |
2025-05-13 23:50:19.324682 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-05-13 23:50:19.324692 | orchestrator | Tuesday 13 May 2025 23:48:23 +0000 (0:00:01.989) 0:01:14.672 ***********
2025-05-13 23:50:19.324701 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-13 23:50:19.324711 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:50:19.324721 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-13 23:50:19.324730 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:50:19.324740 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-13 23:50:19.324749 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:50:19.324764 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-13 23:50:19.324773 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:50:19.324783 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-13 23:50:19.324793 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-13 23:50:19.324802 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:50:19.324812 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-13 23:50:19.324821 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:50:19.324831 | orchestrator |
2025-05-13 23:50:19.324840 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-05-13 23:50:19.324850 | orchestrator | Tuesday 13 May 2025 23:48:25 +0000 (0:00:01.615) 0:01:16.287 ***********
2025-05-13 23:50:19.324860 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-05-13 23:50:19.324913 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-13 23:50:19.324923 | orchestrator |
2025-05-13 23:50:19.324932 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-05-13 23:50:19.324942 | orchestrator | Tuesday 13 May 2025 23:48:26 +0000 (0:00:00.976) 0:01:17.264 ***********
2025-05-13 23:50:19.324951 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:50:19.324961 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:50:19.324970 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:50:19.324980 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:50:19.324995 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:50:19.325005 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:50:19.325014 | orchestrator | skipping: [testbed-node-5]
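The blackbox exporter configuration is likewise rendered only on testbed-manager. The file defines prober modules that Prometheus scrape jobs can then reference; a minimal sketch of the format (module name and settings below are illustrative):

modules:
  http_2xx:                        # hypothetical module name
    prober: http
    timeout: 5s
    http:
      preferred_ip_protocol: "ip4"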
2025-05-13 23:50:19.325024 | orchestrator |
2025-05-13 23:50:19.325033 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-05-13 23:50:19.325043 | orchestrator | Tuesday 13 May 2025 23:48:26 +0000 (0:00:00.763) 0:01:18.027 ***********
2025-05-13 23:50:19.325053 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:50:19.325062 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:50:19.325072 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:50:19.325082 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:50:19.325091 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:50:19.325101 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:50:19.325110 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:50:19.325120 | orchestrator |
2025-05-13 23:50:19.325129 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-05-13 23:50:19.325139 | orchestrator | Tuesday 13 May 2025 23:48:27 +0000 (0:00:00.690) 0:01:18.717 ***********
[loop output condensed: every checked container definition came back changed. Each item carried the full container spec (container_name, group, enabled: True, image registry.osism.tech/kolla/<name>:2024.2, volumes, dimensions: {}). testbed-manager: prometheus-server (volume prometheus_v2:/var/lib/prometheus; haproxy: internal prometheus_server on port 9091, active_passive, external listener disabled, external_fqdn api.testbed.osism.xyz), prometheus-node-exporter, prometheus-cadvisor, prometheus-alertmanager (volume prometheus:/var/lib/prometheus; haproxy: internal and external listeners on port 9093 with auth_user admin / auth_pass BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2, external_fqdn api.testbed.osism.xyz, active_passive) and prometheus-blackbox-exporter. testbed-node-0/1/2: prometheus-node-exporter, prometheus-mysqld-exporter, prometheus-memcached-exporter, prometheus-cadvisor and prometheus-elasticsearch-exporter. testbed-node-3/4/5: prometheus-node-exporter, prometheus-cadvisor and prometheus-libvirt-exporter.]
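The "Check prometheus containers" task compares each expected container definition with what is actually running and reports changed where a container has to be (re)created. The item dicts in the log are exactly these definitions; rendered as YAML, the node-exporter entry from this run reads:

prometheus-node-exporter:
  container_name: prometheus_node_exporter
  group: prometheus-node-exporter
  enabled: true
  image: registry.osism.tech/kolla/prometheus-node-exporter:2024.2
  pid_mode: host                        # shares the host PID namespace
  volumes:
    - "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "kolla_logs:/var/log/kolla/"
    - "/:/host:ro,rslave"               # read-only view of the host filesystem
  dimensions: {}

pid_mode: host and the /:/host bind mount are what let the exporter report host-level process, CPU and filesystem metrics from inside a container.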
2025-05-13 23:50:19.325529 | orchestrator |
2025-05-13 23:50:19.325544 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-05-13 23:50:19.325554 | orchestrator | Tuesday 13 May 2025 23:48:31 +0000 (0:00:04.131) 0:01:22.849 ***********
2025-05-13 23:50:19.325564 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-13 23:50:19.325573 | orchestrator | skipping: [testbed-manager]
2025-05-13 23:50:19.325583 | orchestrator |
2025-05-13 23:50:19.325593 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-13 23:50:19.325602 | orchestrator | Tuesday 13 May 2025 23:48:32 +0000 (0:00:01.228) 0:01:24.077 ***********
[the empty "Flush handlers" task is printed six more times at 23:48:32-23:48:33 with durations 0:00:00.070, 0:00:00.086, 0:00:00.275, 0:00:00.065, 0:00:00.060 and 0:00:00.065, advancing the cumulative timer from 0:01:24.148 to 0:01:24.701]
2025-05-13 23:50:19.325797 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-05-13 23:50:19.325806 | orchestrator | Tuesday 13 May 2025 23:48:33 +0000 (0:00:00.085) 0:01:24.786 ***********
2025-05-13 23:50:19.325815 | orchestrator | changed: [testbed-manager]
2025-05-13 23:50:19.325825 | orchestrator |
2025-05-13 23:50:19.325835 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-05-13 23:50:19.325844 | orchestrator | Tuesday 13 May 2025 23:48:51 +0000 (0:00:18.112) 0:01:42.899 ***********
2025-05-13 23:50:19.325853 | orchestrator | changed: [testbed-manager]
2025-05-13 23:50:19.325863 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:50:19.325873 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:50:19.325882 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:50:19.325891 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:50:19.325901 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:50:19.325910 | orchestrator | changed: [testbed-node-2]
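The repeated "Flush handlers" entries are meta: flush_handlers steps: they force any handlers notified so far (the container restarts below) to run at that exact point instead of waiting for the end of the play, so a container is restarted as soon as its configuration has changed. A generic sketch of the pattern (hosts, paths and the command-based handler are stand-ins; the real role restarts containers through kolla-ansible's own container module):

- hosts: monitoring                                      # hypothetical group name
  tasks:
    - name: Copying over prometheus config file
      ansible.builtin.template:
        src: prometheus.yml.j2
        dest: /etc/kolla/prometheus-server/prometheus.yml
      notify: Restart prometheus-server container        # queues the handler on change

    - name: Flush handlers
      ansible.builtin.meta: flush_handlers               # run queued handlers immediately

  handlers:
    - name: Restart prometheus-server container
      ansible.builtin.command: docker restart prometheus_server   # stand-in for kolla's module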
2025-05-13 23:50:19.325929 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-05-13 23:50:19.325939 | orchestrator | Tuesday 13 May 2025 23:49:06 +0000 (0:00:14.650) 0:01:57.549 ***********
2025-05-13 23:50:19.325949 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:50:19.325958 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:50:19.325967 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:50:19.325977 | orchestrator |
2025-05-13 23:50:19.325986 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-05-13 23:50:19.325996 | orchestrator | Tuesday 13 May 2025 23:49:16 +0000 (0:00:10.404) 0:02:07.954 ***********
2025-05-13 23:50:19.326005 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:50:19.326040 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:50:19.326052 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:50:19.326062 | orchestrator |
2025-05-13 23:50:19.326071 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-05-13 23:50:19.326081 | orchestrator | Tuesday 13 May 2025 23:49:22 +0000 (0:00:06.201) 0:02:14.156 ***********
2025-05-13 23:50:19.326091 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:50:19.326105 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:50:19.326115 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:50:19.326124 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:50:19.326133 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:50:19.326143 | orchestrator | changed: [testbed-manager]
2025-05-13 23:50:19.326152 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:50:19.326161 | orchestrator |
2025-05-13 23:50:19.326171 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-05-13 23:50:19.326181 | orchestrator | Tuesday 13 May 2025 23:49:37 +0000 (0:00:15.086) 0:02:29.243 ***********
2025-05-13 23:50:19.326190 | orchestrator | changed: [testbed-manager]
2025-05-13 23:50:19.326199 | orchestrator |
2025-05-13 23:50:19.326209 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-05-13 23:50:19.326219 | orchestrator | Tuesday 13 May 2025 23:49:49 +0000 (0:00:11.833) 0:02:41.077 ***********
2025-05-13 23:50:19.326228 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:50:19.326238 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:50:19.326247 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:50:19.326256 | orchestrator |
2025-05-13 23:50:19.326266 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-05-13 23:50:19.326275 | orchestrator | Tuesday 13 May 2025 23:49:56 +0000 (0:00:06.974) 0:02:48.051 ***********
2025-05-13 23:50:19.326285 | orchestrator | changed: [testbed-manager]
2025-05-13 23:50:19.326294 | orchestrator |
2025-05-13 23:50:19.326352 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-05-13 23:50:19.326363 | orchestrator | Tuesday 13 May 2025 23:50:03 +0000 (0:00:06.657) 0:02:54.709 ***********
2025-05-13 23:50:19.326372 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:50:19.326389 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:50:19.326399 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:50:19.326409 | orchestrator |
2025-05-13 23:50:19.326418 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:50:19.326436 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-13 23:50:19.326446 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-13 23:50:19.326456 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-13 23:50:19.326465 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-13 23:50:19.326475 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-13 23:50:19.326485 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-13 23:50:19.326494 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-13 23:50:19.326513 | orchestrator |
2025-05-13 23:50:19.326522 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:50:19.326532 | orchestrator | Tuesday 13 May 2025 23:50:16 +0000 (0:00:13.433) 0:03:08.142 ***********
2025-05-13 23:50:19.326542 | orchestrator | ===============================================================================
2025-05-13 23:50:19.326551 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.72s
2025-05-13 23:50:19.326561 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.11s
2025-05-13 23:50:19.326570 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.74s
2025-05-13 23:50:19.326580 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.09s
2025-05-13 23:50:19.326589 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.65s
2025-05-13 23:50:19.326598 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.43s
2025-05-13 23:50:19.326608 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.83s
2025-05-13 23:50:19.326617 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.41s
2025-05-13 23:50:19.326626 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.97s
2025-05-13 23:50:19.326635 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.66s
2025-05-13 23:50:19.326645 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.20s
2025-05-13 23:50:19.326654 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.60s
2025-05-13 23:50:19.326663 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.25s
2025-05-13 23:50:19.326673 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.13s
2025-05-13 23:50:19.326682 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.63s
2025-05-13 23:50:19.326692 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.69s
2025-05-13 23:50:19.326701 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.69s
2025-05-13 23:50:19.326711 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.99s
2025-05-13 23:50:19.326725 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.75s
2025-05-13 23:50:19.326741 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.62s
23:50:19.326741 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.62s 2025-05-13 23:50:22.371690 | orchestrator | 2025-05-13 23:50:22 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:50:22.376225 | orchestrator | 2025-05-13 23:50:22 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:50:22.379421 | orchestrator | 2025-05-13 23:50:22 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:50:22.382488 | orchestrator | 2025-05-13 23:50:22 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:50:22.383078 | orchestrator | 2025-05-13 23:50:22 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:50:25.441208 | orchestrator | 2025-05-13 23:50:25 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:50:25.442395 | orchestrator | 2025-05-13 23:50:25 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:50:25.444212 | orchestrator | 2025-05-13 23:50:25 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:50:25.445631 | orchestrator | 2025-05-13 23:50:25 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:50:25.445860 | orchestrator | 2025-05-13 23:50:25 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:50:28.501967 | orchestrator | 2025-05-13 23:50:28 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:50:28.502406 | orchestrator | 2025-05-13 23:50:28 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:50:28.503682 | orchestrator | 2025-05-13 23:50:28 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:50:28.506251 | orchestrator | 2025-05-13 23:50:28 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:50:28.506339 | orchestrator | 2025-05-13 23:50:28 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:50:31.551918 | orchestrator | 2025-05-13 23:50:31 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:50:31.553269 | orchestrator | 2025-05-13 23:50:31 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:50:31.555789 | orchestrator | 2025-05-13 23:50:31 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:50:31.558972 | orchestrator | 2025-05-13 23:50:31 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:50:31.558989 | orchestrator | 2025-05-13 23:50:31 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:50:34.614678 | orchestrator | 2025-05-13 23:50:34 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:50:34.618504 | orchestrator | 2025-05-13 23:50:34 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:50:34.620669 | orchestrator | 2025-05-13 23:50:34 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:50:34.622939 | orchestrator | 2025-05-13 23:50:34 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:50:34.622994 | orchestrator | 2025-05-13 23:50:34 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:50:37.671193 | orchestrator | 2025-05-13 23:50:37 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 
23:50:37.673752 | orchestrator | 2025-05-13 23:50:37 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:50:37.674317 | orchestrator | 2025-05-13 23:50:37 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:50:37.678238 | orchestrator | 2025-05-13 23:50:37 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:50:37.678385 | orchestrator | 2025-05-13 23:50:37 | INFO  | Wait 1 second(s) until the next check [... identical status checks repeated every ~3 seconds from 23:50:40 to 23:51:23; all four tasks remained in state STARTED throughout ...] 2025-05-13 23:51:26.419472 | orchestrator | 2025-05-13 23:51:26 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:51:26.420327 | orchestrator | 2025-05-13 23:51:26 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:51:26.422622 |
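The block of status lines above shows the OSISM client polling the manager for four Celery task IDs until each leaves the STARTED state, sleeping between checks. The same wait-until pattern can be written in Ansible with until/retries/delay; the osism subcommand below is hypothetical and stands in for whatever status query the client actually performs:

- name: Wait until a manager task has finished
  ansible.builtin.command:
    cmd: osism get status caf353ef-a173-473a-8fe0-54be960b8023   # hypothetical status subcommand
  register: task_state
  changed_when: false
  until: "'STARTED' not in task_state.stdout"
  retries: 600    # give up after roughly 30 minutes
  delay: 3        # matches the ~3 s cadence visible in the log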
orchestrator | 2025-05-13 23:51:26 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state STARTED 2025-05-13 23:51:26.423195 | orchestrator | 2025-05-13 23:51:26 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:51:26.425401 | orchestrator | 2025-05-13 23:51:26 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:51:29.465695 | orchestrator | 2025-05-13 23:51:29 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:51:29.466395 | orchestrator | 2025-05-13 23:51:29 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:51:29.470231 | orchestrator | 2025-05-13 23:51:29 | INFO  | Task 31c4c6fe-c6bf-49a6-89a1-cb843027ccb9 is in state SUCCESS 2025-05-13 23:51:29.473003 | orchestrator | 2025-05-13 23:51:29.473040 | orchestrator | 2025-05-13 23:51:29.473052 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:51:29.473064 | orchestrator | 2025-05-13 23:51:29.473075 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:51:29.473086 | orchestrator | Tuesday 13 May 2025 23:48:23 +0000 (0:00:00.233) 0:00:00.233 *********** 2025-05-13 23:51:29.473097 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:51:29.473110 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:51:29.473121 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:51:29.473132 | orchestrator | 2025-05-13 23:51:29.473179 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:51:29.473191 | orchestrator | Tuesday 13 May 2025 23:48:23 +0000 (0:00:00.410) 0:00:00.643 *********** 2025-05-13 23:51:29.473202 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-13 23:51:29.473214 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-13 23:51:29.473224 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-13 23:51:29.473235 | orchestrator | 2025-05-13 23:51:29.473246 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-13 23:51:29.473257 | orchestrator | 2025-05-13 23:51:29.473268 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-13 23:51:29.473279 | orchestrator | Tuesday 13 May 2025 23:48:24 +0000 (0:00:00.657) 0:00:01.300 *********** 2025-05-13 23:51:29.473290 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:51:29.473302 | orchestrator | 2025-05-13 23:51:29.473313 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-13 23:51:29.473324 | orchestrator | Tuesday 13 May 2025 23:48:25 +0000 (0:00:00.459) 0:00:01.760 *********** 2025-05-13 23:51:29.473334 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-13 23:51:29.473345 | orchestrator | 2025-05-13 23:51:29.473356 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-13 23:51:29.473367 | orchestrator | Tuesday 13 May 2025 23:48:40 +0000 (0:00:15.306) 0:00:17.066 *********** 2025-05-13 23:51:29.473378 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-13 23:51:29.473389 | orchestrator | changed: [testbed-node-0] => (item=glance -> 
https://api.testbed.osism.xyz:9292 -> public) 2025-05-13 23:51:29.473400 | orchestrator | 2025-05-13 23:51:29.473410 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-13 23:51:29.473421 | orchestrator | Tuesday 13 May 2025 23:48:45 +0000 (0:00:05.560) 0:00:22.626 *********** 2025-05-13 23:51:29.473432 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-13 23:51:29.473445 | orchestrator | 2025-05-13 23:51:29.473455 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-13 23:51:29.473466 | orchestrator | Tuesday 13 May 2025 23:48:48 +0000 (0:00:02.744) 0:00:25.371 *********** 2025-05-13 23:51:29.473477 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 23:51:29.473487 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-13 23:51:29.473498 | orchestrator | 2025-05-13 23:51:29.473509 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-13 23:51:29.473519 | orchestrator | Tuesday 13 May 2025 23:48:52 +0000 (0:00:03.559) 0:00:28.931 *********** 2025-05-13 23:51:29.473530 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 23:51:29.473541 | orchestrator | 2025-05-13 23:51:29.473552 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-13 23:51:29.473562 | orchestrator | Tuesday 13 May 2025 23:48:55 +0000 (0:00:02.961) 0:00:31.892 *********** 2025-05-13 23:51:29.473572 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-13 23:51:29.473583 | orchestrator | 2025-05-13 23:51:29.473594 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-13 23:51:29.473622 | orchestrator | Tuesday 13 May 2025 23:48:59 +0000 (0:00:03.914) 0:00:35.807 *********** 2025-05-13 23:51:29.473671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
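The service-ks-register tasks above register Glance in Keystone: a glance service of type image, internal and public endpoints on port 9292, the service project, a glance service user (the no_log warning for update_password is emitted by the module itself and is harmless here), and an admin role grant. A sketch of the same registration with the openstack.cloud collection; Kolla-Ansible performs these steps through kolla-toolbox, so the module choice, the cloud entry, and the password variable are assumptions:

- name: glance | Creating services
  openstack.cloud.catalog_service:
    cloud: testbed                      # assumed clouds.yaml entry
    name: glance
    service_type: image
    state: present

- name: glance | Creating endpoints
  openstack.cloud.endpoint:
    cloud: testbed
    service: glance
    endpoint_interface: "{{ item.interface }}"
    url: "{{ item.url }}"
    state: present
  loop:
    - { interface: internal, url: "https://api-int.testbed.osism.xyz:9292" }
    - { interface: public, url: "https://api.testbed.osism.xyz:9292" }

- name: glance | Creating users
  openstack.cloud.identity_user:
    cloud: testbed
    name: glance
    password: "{{ glance_keystone_password }}"   # assumed variable name
    default_project: service

- name: glance | Granting user roles
  openstack.cloud.role_assignment:
    cloud: testbed
    user: glance
    role: admin
    project: service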
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.473691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.473712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
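The long item dictionaries in this play are all the same glance-api container definition rendered once per host; only no_proxy and the healthcheck URL change with the host's IP. Distilled to its essential keys (values taken from the log, comments added):

glance-api:
  container_name: glance_api
  image: registry.osism.tech/kolla/glance-api:2024.2
  privileged: true                    # needs /dev and iSCSI state from the host
  volumes:
    - /etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro
    - glance:/var/lib/glance/         # image store volume
    - kolla_logs:/var/log/kolla/
    - iscsi_info:/etc/iscsi
    - /dev:/dev
  healthcheck:                        # per-host HTTP probe, e.g. on testbed-node-0
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"]
    interval: "30"
    retries: "3"
    timeout: "30"
  haproxy:
    glance_api:                       # internal frontend on port 9292,
      external: false                 # 6 h client/server timeouts, three backend members
    glance_api_external:              # public frontend behind api.testbed.osism.xyz
      external: true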
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.473733 | orchestrator | 2025-05-13 23:51:29.473745 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-13 23:51:29.473757 | orchestrator | Tuesday 13 May 2025 23:49:03 +0000 (0:00:04.080) 0:00:39.887 *********** 2025-05-13 23:51:29.473770 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:51:29.473782 | orchestrator | 2025-05-13 23:51:29.473801 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-13 23:51:29.473813 | orchestrator | Tuesday 13 May 2025 23:49:03 +0000 (0:00:00.684) 0:00:40.572 *********** 2025-05-13 23:51:29.473826 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:51:29.473838 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:51:29.473850 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:51:29.473863 | orchestrator | 2025-05-13 23:51:29.473876 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-13 23:51:29.473888 | orchestrator | Tuesday 13 May 2025 23:49:07 +0000 (0:00:03.432) 0:00:44.004 *********** 2025-05-13 23:51:29.473900 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 23:51:29.473913 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 23:51:29.473925 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 23:51:29.473937 | orchestrator | 2025-05-13 23:51:29.473949 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-13 23:51:29.473961 | orchestrator | Tuesday 13 May 2025 23:49:09 +0000 (0:00:01.718) 0:00:45.723 *********** 2025-05-13 23:51:29.473973 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 23:51:29.473985 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 23:51:29.473997 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 23:51:29.474009 | orchestrator | 2025-05-13 23:51:29.474064 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-13 23:51:29.474084 | orchestrator | Tuesday 13 May 2025 23:49:10 +0000 (0:00:01.036) 0:00:46.760 *********** 2025-05-13 23:51:29.474190 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:51:29.474207 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:51:29.474218 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:51:29.474229 | orchestrator | 2025-05-13 23:51:29.474245 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-13 23:51:29.474264 | orchestrator | Tuesday 13 May 2025 23:49:10 +0000 (0:00:00.808) 0:00:47.569 *********** 2025-05-13 23:51:29.474296 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.474315 | orchestrator | 2025-05-13 23:51:29.474332 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-13 23:51:29.474347 | orchestrator | Tuesday 13 May 2025 23:49:10 +0000 (0:00:00.113) 
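The external_ceph.yml tasks above place one cluster config and one keyring per enabled Ceph backend into the glance-api config directory; the single item looped over here is {'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}. A minimal sketch of that loop, with variable and path names assumed:

- name: Copy over multiple ceph configs for Glance
  ansible.builtin.template:
    src: "{{ item.cluster }}.conf.j2"
    dest: "/etc/kolla/glance-api/ceph/{{ item.cluster }}.conf"
    mode: "0660"
  loop: "{{ glance_ceph_backends | selectattr('enabled') | list }}"   # assumed variable name

- name: Copy over ceph Glance keyrings
  ansible.builtin.copy:
    src: "{{ node_custom_config }}/glance/ceph.client.glance.keyring"  # assumed source path
    dest: "/etc/kolla/glance-api/ceph/{{ item.cluster }}.client.glance.keyring"
    mode: "0600"
  loop: "{{ glance_ceph_backends | selectattr('enabled') | list }}"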
0:00:47.682 *********** 2025-05-13 23:51:29.474358 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.474368 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.474379 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.474389 | orchestrator | 2025-05-13 23:51:29.474400 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-13 23:51:29.474410 | orchestrator | Tuesday 13 May 2025 23:49:11 +0000 (0:00:00.269) 0:00:47.952 *********** 2025-05-13 23:51:29.474421 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:51:29.474432 | orchestrator | 2025-05-13 23:51:29.474442 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-13 23:51:29.474453 | orchestrator | Tuesday 13 May 2025 23:49:11 +0000 (0:00:00.462) 0:00:48.414 *********** 2025-05-13 23:51:29.474482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.474497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.474522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.474534 | orchestrator | 2025-05-13 23:51:29.474545 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-13 23:51:29.474556 | orchestrator | Tuesday 13 May 2025 23:49:14 +0000 (0:00:03.236) 0:00:51.650 *********** 2025-05-13 23:51:29.474576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 23:51:29.474595 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.474607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 23:51:29.474619 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.474643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 23:51:29.474656 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.474667 | orchestrator | 2025-05-13 23:51:29.474677 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-13 23:51:29.474688 | orchestrator | Tuesday 13 May 2025 23:49:18 +0000 (0:00:03.943) 0:00:55.594 *********** 2025-05-13 23:51:29.474707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 23:51:29.474719 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.474742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 23:51:29.474754 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.474766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 23:51:29.474784 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.474795 | orchestrator | 2025-05-13 23:51:29.474806 | orchestrator | TASK 
[glance : Creating TLS backend PEM File] ********************************** 2025-05-13 23:51:29.474817 | orchestrator | Tuesday 13 May 2025 23:49:22 +0000 (0:00:03.585) 0:00:59.180 *********** 2025-05-13 23:51:29.474827 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.474838 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.474849 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.474859 | orchestrator | 2025-05-13 23:51:29.474870 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-13 23:51:29.474881 | orchestrator | Tuesday 13 May 2025 23:49:28 +0000 (0:00:06.351) 0:01:05.532 *********** 2025-05-13 23:51:29.474908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.474922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.474945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.474957 | orchestrator | 2025-05-13 23:51:29.474968 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-13 23:51:29.474979 | orchestrator | Tuesday 13 May 2025 23:49:33 +0000 (0:00:04.442) 0:01:09.974 *********** 2025-05-13 23:51:29.474990 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:51:29.475000 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:51:29.475011 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:51:29.475022 | orchestrator | 2025-05-13 23:51:29.475032 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-13 23:51:29.475043 | orchestrator | Tuesday 13 May 2025 23:49:38 +0000 (0:00:05.503) 0:01:15.478 *********** 2025-05-13 23:51:29.475053 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.475064 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.475075 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.475092 | orchestrator | 2025-05-13 23:51:29.475103 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-13 
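The config.json copied above is read by Kolla's container entrypoint (kolla_start) at boot: it copies the files mounted under /var/lib/kolla/config_files/ into place with the right ownership and permissions, then execs the service. For glance-api it looks roughly like the following; this shape is reconstructed from how Kolla images generally work, not quoted from this job:

{
  "command": "glance-api",
  "config_files": [
    {
      "source": "/var/lib/kolla/config_files/glance-api.conf",
      "dest": "/etc/glance/glance-api.conf",
      "owner": "glance",
      "perm": "0600"
    }
  ]
}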
23:51:29.475299 | orchestrator | Tuesday 13 May 2025 23:49:43 +0000 (0:00:04.981) 0:01:20.460 *********** 2025-05-13 23:51:29.475317 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.475328 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.475339 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.475350 | orchestrator | 2025-05-13 23:51:29.475361 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-13 23:51:29.475372 | orchestrator | Tuesday 13 May 2025 23:49:48 +0000 (0:00:04.359) 0:01:24.819 *********** 2025-05-13 23:51:29.475382 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.475393 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.475404 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.475415 | orchestrator | 2025-05-13 23:51:29.475426 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-13 23:51:29.475436 | orchestrator | Tuesday 13 May 2025 23:49:53 +0000 (0:00:05.683) 0:01:30.502 *********** 2025-05-13 23:51:29.475447 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.475458 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.475468 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.475479 | orchestrator | 2025-05-13 23:51:29.475490 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-13 23:51:29.475500 | orchestrator | Tuesday 13 May 2025 23:49:57 +0000 (0:00:03.657) 0:01:34.160 *********** 2025-05-13 23:51:29.475511 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.475521 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.475532 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.475543 | orchestrator | 2025-05-13 23:51:29.475553 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-13 23:51:29.475607 | orchestrator | Tuesday 13 May 2025 23:49:57 +0000 (0:00:00.521) 0:01:34.681 *********** 2025-05-13 23:51:29.475620 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-13 23:51:29.475631 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.475642 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-13 23:51:29.475653 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.475708 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-13 23:51:29.475729 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.475750 | orchestrator | 2025-05-13 23:51:29.475770 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-13 23:51:29.475791 | orchestrator | Tuesday 13 May 2025 23:50:07 +0000 (0:00:09.105) 0:01:43.787 *********** 2025-05-13 23:51:29.475813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.475857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.475905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 23:51:29.475921 | orchestrator | 2025-05-13 23:51:29.475933 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-13 23:51:29.475952 | orchestrator | Tuesday 13 May 2025 23:50:11 +0000 (0:00:04.909) 0:01:48.696 *********** 2025-05-13 23:51:29.475964 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:51:29.475976 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:51:29.475988 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:51:29.476000 | orchestrator | 2025-05-13 23:51:29.476012 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-13 23:51:29.476028 | orchestrator | Tuesday 13 May 2025 23:50:12 +0000 (0:00:00.244) 0:01:48.940 *********** 2025-05-13 23:51:29.476040 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:51:29.476052 | orchestrator | 2025-05-13 23:51:29.476064 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-13 23:51:29.476076 | orchestrator | Tuesday 13 May 2025 23:50:14 +0000 (0:00:02.058) 0:01:50.999 *********** 2025-05-13 23:51:29.476088 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:51:29.476100 | orchestrator | 2025-05-13 23:51:29.476111 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-13 23:51:29.476123 | orchestrator | Tuesday 13 May 2025 23:50:16 +0000 (0:00:02.016) 0:01:53.015 *********** 2025-05-13 23:51:29.476135 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:51:29.476179 | orchestrator | 2025-05-13 23:51:29.476192 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-13 23:51:29.476203 | orchestrator | Tuesday 13 May 2025 23:50:18 +0000 (0:00:01.989) 0:01:55.004 *********** 2025-05-13 23:51:29.476215 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:51:29.476227 | orchestrator | 2025-05-13 23:51:29.476239 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-13 23:51:29.476250 | orchestrator | Tuesday 13 May 2025 23:50:47 +0000 (0:00:28.703) 0:02:23.708 *********** 2025-05-13 23:51:29.476260 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:51:29.476271 | orchestrator | 2025-05-13 23:51:29.476289 | orchestrator | TASK [glance : Flush handlers] 
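An aside on the Enable/Disable pair just logged: log_bin_trust_function_creators is a stock MariaDB/MySQL global that the play toggles around the bootstrap so the schema migration can create stored functions while binary logging is on. A minimal sketch of the same toggle, assuming PyMySQL is available; host and credentials are hypothetical (kolla-ansible does this through its own Ansible modules):

```python
# Sketch: toggle log_bin_trust_function_creators around a schema migration,
# mirroring the Enable/Disable tasks in the log above. Connection details
# are hypothetical placeholders.
import pymysql  # assumed available: pip install pymysql

conn = pymysql.connect(host="192.168.16.9", user="root", password="secret")
try:
    with conn.cursor() as cur:
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
        # ... run the bootstrap (e.g. "glance-manage db sync") here ...
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
finally:
    conn.close()
```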
************************************************* 2025-05-13 23:51:29.476300 | orchestrator | Tuesday 13 May 2025 23:50:49 +0000 (0:00:02.403) 0:02:26.111 *********** 2025-05-13 23:51:29.476311 | orchestrator | 2025-05-13 23:51:29.476321 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-13 23:51:29.476332 | orchestrator | Tuesday 13 May 2025 23:50:49 +0000 (0:00:00.063) 0:02:26.175 *********** 2025-05-13 23:51:29.476343 | orchestrator | 2025-05-13 23:51:29.476354 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-13 23:51:29.476364 | orchestrator | Tuesday 13 May 2025 23:50:49 +0000 (0:00:00.066) 0:02:26.241 *********** 2025-05-13 23:51:29.476375 | orchestrator | 2025-05-13 23:51:29.476386 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-13 23:51:29.476397 | orchestrator | Tuesday 13 May 2025 23:50:49 +0000 (0:00:00.064) 0:02:26.306 *********** 2025-05-13 23:51:29.476407 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:51:29.476418 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:51:29.476429 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:51:29.476439 | orchestrator | 2025-05-13 23:51:29.476450 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:51:29.476462 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-13 23:51:29.476474 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-13 23:51:29.476485 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-13 23:51:29.476496 | orchestrator | 2025-05-13 23:51:29.476507 | orchestrator | 2025-05-13 23:51:29.476517 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:51:29.476528 | orchestrator | Tuesday 13 May 2025 23:51:27 +0000 (0:00:38.192) 0:03:04.498 *********** 2025-05-13 23:51:29.476546 | orchestrator | =============================================================================== 2025-05-13 23:51:29.476557 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.19s 2025-05-13 23:51:29.476568 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.70s 2025-05-13 23:51:29.476578 | orchestrator | service-ks-register : glance | Creating services ----------------------- 15.31s 2025-05-13 23:51:29.476589 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 9.11s 2025-05-13 23:51:29.476600 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 6.35s 2025-05-13 23:51:29.476610 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.68s 2025-05-13 23:51:29.476621 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.56s 2025-05-13 23:51:29.476631 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.50s 2025-05-13 23:51:29.476642 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.98s 2025-05-13 23:51:29.476653 | orchestrator | glance : Check glance containers ---------------------------------------- 4.91s 2025-05-13 23:51:29.476663 | orchestrator | glance : 
Copying over config.json files for services -------------------- 4.44s 2025-05-13 23:51:29.476674 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.36s 2025-05-13 23:51:29.476684 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.08s 2025-05-13 23:51:29.476695 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.94s 2025-05-13 23:51:29.476706 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.91s 2025-05-13 23:51:29.476716 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.66s 2025-05-13 23:51:29.476727 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.59s 2025-05-13 23:51:29.476738 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.56s 2025-05-13 23:51:29.476748 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.43s 2025-05-13 23:51:29.476759 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.24s
2025-05-13 23:51:29.476775 | orchestrator | 2025-05-13 23:51:29 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED 2025-05-13 23:51:29.476790 | orchestrator | 2025-05-13 23:51:29 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:51:29.476802 | orchestrator | 2025-05-13 23:51:29 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:51:32.531567 | orchestrator | 2025-05-13 23:51:32 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:51:32.531780 | orchestrator | 2025-05-13 23:51:32 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:51:32.532742 | orchestrator | 2025-05-13 23:51:32 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED 2025-05-13 23:51:32.533444 | orchestrator | 2025-05-13 23:51:32 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:51:32.533607 | orchestrator | 2025-05-13 23:51:32 | INFO  | Wait 1 second(s) until the next check
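These repeating status lines come from the OSISM client polling its queued, Celery-style tasks until each reaches a terminal state, sleeping between checks. A minimal sketch of such a loop; get_task_state is a hypothetical stand-in for the real status call:

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll every `interval` seconds until every task reaches a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):       # sorted() copies, so discard below is safe
            state = get_task_state(task_id)   # hypothetical status lookup
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
```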
2025-05-13 23:51:59.959851 | orchestrator | 2025-05-13 23:51:59 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:51:59.960115 | orchestrator | 2025-05-13 23:51:59 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:51:59.962877 | orchestrator | 2025-05-13 23:51:59 | INFO  | Task 59459dea-fec7-43d5-96c9-7951de6850e5 is in state STARTED 2025-05-13 23:51:59.963518 | orchestrator | 2025-05-13 23:51:59 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED 2025-05-13 23:51:59.964456 | orchestrator | 2025-05-13 23:51:59 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:51:59.964491 | orchestrator | 2025-05-13 23:51:59 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:52:18.270803 | orchestrator | 2025-05-13 23:52:18 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:52:18.271006 | orchestrator | 2025-05-13 23:52:18 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:52:18.271349 | orchestrator | 2025-05-13 23:52:18 | INFO  | Task 59459dea-fec7-43d5-96c9-7951de6850e5 is in state SUCCESS 2025-05-13 23:52:18.271954 | orchestrator | 2025-05-13 23:52:18 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED 2025-05-13 23:52:18.273939 | orchestrator | 2025-05-13 23:52:18 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state STARTED 2025-05-13 23:52:18.274175 | orchestrator | 2025-05-13 23:52:18 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:52:57.782322 | orchestrator | 2025-05-13 23:52:57 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:52:57.784233 | orchestrator | 2025-05-13 23:52:57 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:52:57.786296 | orchestrator | 2025-05-13 23:52:57 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
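The cinder play that resumes below registers the cinderv3 (volumev3) service and its internal and public endpoints in Keystone; the %(tenant_id)s placeholder in the endpoint URLs is expanded per project at request time. A rough equivalent using openstacksdk, assuming it is installed, with a hypothetical clouds.yaml entry and an illustrative region (the log does not show the region):

```python
import openstack  # openstacksdk, assumed available

conn = openstack.connect(cloud="testbed")  # hypothetical clouds.yaml entry

# Register the service, then one endpoint per interface.
service = conn.identity.create_service(name="cinderv3", type="volumev3")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
    ("public", "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
]:
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",  # illustrative value
    )
```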
23:52:57.787863 | orchestrator | 2025-05-13 23:52:57 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED 2025-05-13 23:52:57.791552 | orchestrator | 2025-05-13 23:52:57 | INFO  | Task 15721540-d0a5-4152-a6e7-334d62efbcaf is in state SUCCESS 2025-05-13 23:52:57.791635 | orchestrator | 2025-05-13 23:52:57 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:52:57.793084 | orchestrator | 2025-05-13 23:52:57.793113 | orchestrator | None 2025-05-13 23:52:57.793121 | orchestrator | 2025-05-13 23:52:57.793128 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:52:57.793135 | orchestrator | 2025-05-13 23:52:57.793142 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:52:57.793149 | orchestrator | Tuesday 13 May 2025 23:48:50 +0000 (0:00:00.339) 0:00:00.339 *********** 2025-05-13 23:52:57.793155 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:52:57.793164 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:52:57.793171 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:52:57.793178 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:52:57.793184 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:52:57.793191 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:52:57.793197 | orchestrator | 2025-05-13 23:52:57.793204 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:52:57.793211 | orchestrator | Tuesday 13 May 2025 23:48:51 +0000 (0:00:00.835) 0:00:01.175 *********** 2025-05-13 23:52:57.793217 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-13 23:52:57.793225 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-13 23:52:57.793231 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-13 23:52:57.793238 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-13 23:52:57.793244 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-13 23:52:57.793250 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-13 23:52:57.793257 | orchestrator | 2025-05-13 23:52:57.793263 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-13 23:52:57.793270 | orchestrator | 2025-05-13 23:52:57.793276 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-13 23:52:57.793283 | orchestrator | Tuesday 13 May 2025 23:48:51 +0000 (0:00:00.677) 0:00:01.852 *********** 2025-05-13 23:52:57.793289 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:52:57.793298 | orchestrator | 2025-05-13 23:52:57.793304 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-13 23:52:57.793311 | orchestrator | Tuesday 13 May 2025 23:48:54 +0000 (0:00:03.017) 0:00:04.869 *********** 2025-05-13 23:52:57.793318 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-13 23:52:57.793324 | orchestrator | 2025-05-13 23:52:57.793331 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-13 23:52:57.793337 | orchestrator | Tuesday 13 May 2025 23:48:58 +0000 (0:00:03.157) 0:00:08.027 *********** 2025-05-13 23:52:57.793344 | orchestrator | changed: [testbed-node-0] => 
(item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-13 23:52:57.793351 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-13 23:52:57.793357 | orchestrator | 2025-05-13 23:52:57.793364 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-13 23:52:57.793370 | orchestrator | Tuesday 13 May 2025 23:49:04 +0000 (0:00:06.372) 0:00:14.399 *********** 2025-05-13 23:52:57.793377 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-13 23:52:57.793383 | orchestrator | 2025-05-13 23:52:57.793390 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-13 23:52:57.793397 | orchestrator | Tuesday 13 May 2025 23:49:07 +0000 (0:00:03.103) 0:00:17.503 *********** 2025-05-13 23:52:57.793403 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 23:52:57.793449 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-13 23:52:57.793471 | orchestrator | 2025-05-13 23:52:57.793541 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-13 23:52:57.793549 | orchestrator | Tuesday 13 May 2025 23:49:11 +0000 (0:00:03.604) 0:00:21.107 *********** 2025-05-13 23:52:57.793556 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 23:52:57.793567 | orchestrator | 2025-05-13 23:52:57.793578 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-13 23:52:57.793589 | orchestrator | Tuesday 13 May 2025 23:49:14 +0000 (0:00:03.079) 0:00:24.187 *********** 2025-05-13 23:52:57.793668 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-13 23:52:57.793682 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-13 23:52:57.793692 | orchestrator | 2025-05-13 23:52:57.793704 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-13 23:52:57.793714 | orchestrator | Tuesday 13 May 2025 23:49:22 +0000 (0:00:07.819) 0:00:32.006 *********** 2025-05-13 23:52:57.793745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 23:52:57.793762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 23:52:57.793776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.793789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 23:52:57.793814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.793823 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.793838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.793847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.793854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.793871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.793881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.793889 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.793897 | orchestrator | 2025-05-13 23:52:57.793909 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-13 23:52:57.793916 | orchestrator | Tuesday 13 May 2025 23:49:24 +0000 (0:00:02.177) 0:00:34.184 *********** 2025-05-13 23:52:57.793922 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:52:57.793929 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:52:57.793936 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:52:57.793943 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:52:57.793949 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:52:57.793997 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:52:57.794005 | orchestrator | 2025-05-13 23:52:57.794011 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-13 23:52:57.794076 | orchestrator | Tuesday 13 May 2025 23:49:25 +0000 (0:00:01.508) 0:00:35.693 *********** 2025-05-13 23:52:57.794083 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:52:57.794089 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:52:57.794096 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:52:57.794103 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:52:57.794157 | orchestrator | 2025-05-13 23:52:57.794164 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-13 23:52:57.794171 | orchestrator | Tuesday 13 May 2025 23:49:27 +0000 (0:00:01.778) 0:00:37.472 *********** 2025-05-13 23:52:57.794177 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-13 23:52:57.794184 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-13 23:52:57.794191 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-13 23:52:57.794197 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-13 23:52:57.794210 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-13 23:52:57.794217 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-13 23:52:57.794223 | orchestrator | 2025-05-13 23:52:57.794230 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-13 23:52:57.794236 | orchestrator | Tuesday 13 May 2025 23:49:29 
+0000 (0:00:01.916) 0:00:39.388 *********** 2025-05-13 23:52:57.794244 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-13 23:52:57.794257 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-13 23:52:57.794265 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-13 23:52:57.794279 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-13 23:52:57.794286 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-13 23:52:57.794298 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-13 23:52:57.794309 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 23:52:57.794316 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 23:52:57.794328 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 23:52:57.794340 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 23:52:57.794348 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 23:52:57.794358 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 23:52:57.794365 | orchestrator | 2025-05-13 23:52:57.794372 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-13 23:52:57.794379 | orchestrator | Tuesday 13 May 2025 23:49:33 +0000 (0:00:04.092) 0:00:43.481 *********** 2025-05-13 23:52:57.794385 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 
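The keyring tasks around this point fan the Ceph client keyrings out into each consuming service's kolla config directory (ceph.client.cinder.keyring, and additionally ceph.client.cinder-backup.keyring for cinder-backup). A minimal sketch of that fan-out; the source directory, destination layout, and per-service mapping are assumptions for illustration, and kolla-ansible actually performs this with templated copy tasks:

```python
import shutil
from pathlib import Path

# Hypothetical directory holding the cluster keyrings on the deploy host.
SRC = Path("/opt/configuration/environments/kolla/files/ceph")

# Assumed mapping of service -> keyring files, based on the task output above.
KEYRINGS = {
    "cinder-volume": ["ceph.client.cinder.keyring"],
    "cinder-backup": ["ceph.client.cinder.keyring",
                      "ceph.client.cinder-backup.keyring"],
}

for service, files in KEYRINGS.items():
    dest = Path("/etc/kolla") / service  # assumed destination layout
    dest.mkdir(parents=True, exist_ok=True)
    for name in files:
        shutil.copy2(SRC / name, dest / name)  # preserves mode and timestamps
```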
2025-05-13 23:52:57.794392 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-13 23:52:57.794399 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-13 23:52:57.794405 | orchestrator |
2025-05-13 23:52:57.794412 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-05-13 23:52:57.794418 | orchestrator | Tuesday 13 May 2025 23:49:35 +0000 (0:00:01.782) 0:00:45.264 ***********
2025-05-13 23:52:57.794425 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-05-13 23:52:57.794431 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-05-13 23:52:57.794438 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-05-13 23:52:57.794444 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-05-13 23:52:57.794450 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-05-13 23:52:57.794467 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
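The two keyring tasks above fan the Ceph client keys out to every cinder-volume and cinder-backup host. A minimal sketch of what such a task can look like in kolla-ansible style follows; the src/dest paths and the node_custom_config variable are illustrative assumptions, not the literal role source:

    - name: Copy over Ceph keyring files for cinder-backup
      become: true
      copy:
        # Illustrative source layout; the real role derives these paths
        # from its own configuration variables.
        src: "{{ node_custom_config }}/cinder/cinder-backup/{{ item }}"
        dest: "{{ node_config_directory }}/cinder-backup/{{ item }}"
        mode: "0660"
      with_items:
        - ceph.client.cinder.keyring
        - ceph.client.cinder-backup.keyring
      when: inventory_hostname in groups['cinder-backup']

The item values printed in the log (ceph.client.cinder.keyring, ceph.client.cinder-backup.keyring) are exactly what such a with_items loop would emit.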
2025-05-13 23:52:57.794474 | orchestrator |
2025-05-13 23:52:57.794480 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-05-13 23:52:57.794491 | orchestrator | Tuesday 13 May 2025 23:49:38 +0000 (0:00:02.823) 0:00:48.087 ***********
2025-05-13 23:52:57.794498 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-05-13 23:52:57.794505 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-05-13 23:52:57.794511 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-05-13 23:52:57.794518 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-05-13 23:52:57.794525 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-05-13 23:52:57.794531 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-05-13 23:52:57.794537 | orchestrator |
2025-05-13 23:52:57.794544 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-05-13 23:52:57.794550 | orchestrator | Tuesday 13 May 2025 23:49:39 +0000 (0:00:01.086) 0:00:49.174 ***********
2025-05-13 23:52:57.794557 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:52:57.794563 | orchestrator |
2025-05-13 23:52:57.794570 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-05-13 23:52:57.794576 | orchestrator | Tuesday 13 May 2025 23:49:39 +0000 (0:00:00.261) 0:00:49.435 ***********
2025-05-13 23:52:57.794583 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:52:57.794589 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:52:57.794595 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:52:57.794602 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:52:57.794608 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:52:57.794615 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:52:57.794621 | orchestrator |
2025-05-13 23:52:57.794628 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-13 23:52:57.794634 | orchestrator | Tuesday 13 May 2025 23:49:40 +0000 (0:00:01.270) 0:00:50.706 ***********
2025-05-13 23:52:57.794641 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 23:52:57.794648 | orchestrator |
2025-05-13 23:52:57.794655 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-05-13 23:52:57.794661 | orchestrator | Tuesday 13 May 2025 23:49:42 +0000 (0:00:01.728) 0:00:52.435 ***********
2025-05-13 23:52:57.794669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.794683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.794707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.794714 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794729 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794747 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794793 | orchestrator |
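Every record in the service-cert-copy tasks prints one {'key': ..., 'value': ...} pair, which is the signature of a loop over a services dictionary. A sketch of the pattern, assuming a dict2items-style loop and illustrative variable names:

    - name: cinder | Copying over extra CA certificates
      become: true
      copy:
        src: "{{ kolla_certificates_dir }}/ca/"   # illustrative source path
        dest: "{{ node_config_directory }}/{{ item.key }}/ca-certificates/"
        mode: "0644"
      loop: "{{ cinder_services | dict2items }}"
      when:
        - item.value.enabled | bool
        - item.value.group in group_names

The group condition explains the distribution seen above: cinder-api and cinder-scheduler items run on testbed-node-0..2, while cinder-volume and cinder-backup items run on testbed-node-3..5.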
2025-05-13 23:52:57.794803 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-05-13 23:52:57.794810 | orchestrator | Tuesday 13 May 2025 23:49:46 +0000 (0:00:03.890) 0:00:56.325 ***********
2025-05-13 23:52:57.794817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.794833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794840 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:52:57.794847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.794854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.794872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794884 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:52:57.794891 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:52:57.794898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794916 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:52:57.794923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794937 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:52:57.794947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.794993 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:52:57.794999 | orchestrator |
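Both backend TLS tasks skip on every host and every item. Consistent with that, each printed service dict carries 'tls_backend': 'no' in its haproxy entries, so a guard along these lines evaluates false everywhere (the flag name kolla_enable_tls_backend and the paths are assumptions):

    - name: cinder | Copying over backend internal TLS certificate
      become: true
      copy:
        src: "{{ kolla_tls_backend_cert }}"      # illustrative
        dest: "{{ node_config_directory }}/{{ item.key }}/{{ inventory_hostname }}-cert.pem"
        mode: "0644"
      loop: "{{ cinder_services | dict2items }}"
      when:
        - item.value.enabled | bool
        - item.value.group in group_names
        - kolla_enable_tls_backend | bool        # false in this deployment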
2025-05-13 23:52:57.795006 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-05-13 23:52:57.795013 | orchestrator | Tuesday 13 May 2025 23:49:47 +0000 (0:00:01.544) 0:00:57.869 ***********
2025-05-13 23:52:57.795024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795038 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:52:57.795045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795069 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:52:57.795076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795095 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:52:57.795102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795116 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:52:57.795126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795148 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:52:57.795160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795174 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:52:57.795181 | orchestrator |
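The item dicts repeated throughout this play are the entries of a single services mapping that the role loops over. Rendered as YAML (condensed to one service; values taken from the log, the mapping name is assumed), the structure looks roughly like this:

    cinder_services:
      cinder-api:
        container_name: cinder_api
        group: cinder-api
        enabled: true
        image: registry.osism.tech/kolla/cinder-api:2024.2
        volumes:
          - "/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "kolla_logs:/var/log/kolla/"
        dimensions: {}
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"]
          timeout: "30"
        haproxy:
          cinder_api:
            enabled: "yes"
            mode: "http"
            external: false
            port: "8776"
            listen_port: "8776"
            tls_backend: "no"
          cinder_api_external:
            enabled: "yes"
            mode: "http"
            external: true
            external_fqdn: api.testbed.osism.xyz
            port: "8776"
            listen_port: "8776"
            tls_backend: "no"

Every task that prints {'key': 'cinder-api', 'value': {...}} is iterating over this mapping with per-host values substituted in (note the healthcheck IP changing from 192.168.16.10 to .11 and .12 across the three API hosts).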
2025-05-13 23:52:57.795187 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-05-13 23:52:57.795194 | orchestrator | Tuesday 13 May 2025 23:49:50 +0000 (0:00:02.552) 0:01:00.422 ***********
2025-05-13 23:52:57.795201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795254 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795276 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795301 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795308 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795322 | orchestrator |
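kolla images read /var/lib/kolla/config_files/config.json at container start to place configuration files and select the command to run; the volumes above mount each service's /etc/kolla/<service>/ directory to exactly that path. The task above writes that file per service. A sketch, assuming a per-service Jinja2 template and illustrative naming:

    - name: Copying over config.json files for services
      become: true
      template:
        src: "{{ item.key }}.json.j2"   # e.g. cinder-api.json.j2, assumed naming
        dest: "{{ node_config_directory }}/{{ item.key }}/config.json"
        mode: "0660"
      loop: "{{ cinder_services | dict2items }}"
      when:
        - item.value.enabled | bool
        - item.value.group in group_names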
2025-05-13 23:52:57.795329 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-05-13 23:52:57.795336 | orchestrator | Tuesday 13 May 2025 23:49:54 +0000 (0:00:04.356) 0:01:04.778 ***********
2025-05-13 23:52:57.795342 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-13 23:52:57.795349 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:52:57.795356 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-13 23:52:57.795363 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:52:57.795372 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-13 23:52:57.795379 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-13 23:52:57.795386 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:52:57.795392 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-13 23:52:57.795399 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-13 23:52:57.795405 | orchestrator |
2025-05-13 23:52:57.795412 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-05-13 23:52:57.795418 | orchestrator | Tuesday 13 May 2025 23:49:57 +0000 (0:00:02.216) 0:01:06.994 ***********
2025-05-13 23:52:57.795425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795530 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795537 | orchestrator |
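kolla-ansible assembles service configuration by layering INI snippets rather than templating a single file; its custom merge_configs action plugin merges the listed sources in order. A sketch of how a cinder.conf task of this kind is typically shaped (the exact source list here is an assumption):

    - name: Copying over cinder.conf
      become: true
      merge_configs:                     # kolla-ansible's INI-merge action plugin
        sources:
          - "{{ role_path }}/templates/cinder.conf.j2"
          - "{{ node_custom_config }}/global.conf"
          - "{{ node_custom_config }}/cinder.conf"
          - "{{ node_custom_config }}/cinder/{{ item.key }}.conf"
          - "{{ node_custom_config }}/cinder/{{ inventory_hostname }}/cinder.conf"
        dest: "{{ node_config_directory }}/{{ item.key }}/cinder.conf"
        mode: "0660"
      loop: "{{ cinder_services | dict2items }}"
      when:
        - item.value.enabled | bool
        - item.value.group in group_names

Later files in the sources list win, which is what lets an operator override single options per service or per host.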
2025-05-13 23:52:57.795544 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-05-13 23:52:57.795551 | orchestrator | Tuesday 13 May 2025 23:50:10 +0000 (0:00:12.984) 0:01:19.978 ***********
2025-05-13 23:52:57.795561 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:52:57.795568 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:52:57.795575 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:52:57.795581 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:52:57.795588 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:52:57.795594 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:52:57.795600 | orchestrator |
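The hostnqn file gives a host its NVMe-oF qualified name, which only matters where cinder-volume runs; hence the skips on testbed-node-0..2 and the changes on testbed-node-3..5 above. One plausible shape for such a task (the NQN derivation shown is illustrative, not the role's actual rule):

    - name: Generating 'hostnqn' file for cinder_volume
      become: true
      vars:
        hostnqn: "nqn.2014-08.org.nvmexpress:uuid:{{ ansible_facts['hostname'] | to_uuid }}"
      copy:
        content: "{{ hostnqn }}\n"
        dest: "{{ node_config_directory }}/cinder-volume/hostnqn"
        mode: "0660"
      when: inventory_hostname in groups['cinder-volume']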
2025-05-13 23:52:57.795607 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-05-13 23:52:57.795614 | orchestrator | Tuesday 13 May 2025 23:50:12 +0000 (0:00:02.261) 0:01:22.240 ***********
2025-05-13 23:52:57.795666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795681 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:52:57.795691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795706 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:52:57.795718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 23:52:57.795730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795737 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:52:57.795744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795758 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:52:57.795772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795793 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:52:57.795809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.795824 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:52:57.795831 | orchestrator |
2025-05-13 23:52:57.795837 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2025-05-13 23:52:57.795844 | orchestrator | Tuesday 13 May 2025 23:50:13 +0000 (0:00:00.841) 0:01:23.081 ***********
2025-05-13 23:52:57.795851 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:52:57.795857 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:52:57.795864 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:52:57.795870 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:52:57.795877 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:52:57.795883 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:52:57.795890 | orchestrator |
2025-05-13 23:52:57.795896 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2025-05-13 23:52:57.795903 | orchestrator | Tuesday 13 May 2025 23:50:13 +0000 (0:00:00.621) 0:01:23.703 ***********
2025-05-13 23:52:57.795913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 23:52:57.795920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 23:52:57.795949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 23:52:57.795976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.795984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.795995 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.796002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.796018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.796025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:52:57.796032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.796040 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.796050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 23:52:57.796059 | orchestrator |
2025-05-13 23:52:57.796071 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-13 23:52:57.796084 | orchestrator | Tuesday 13 May 2025 23:50:15 +0000 (0:00:02.001) 0:01:25.705 ***********
2025-05-13 23:52:57.796103 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:52:57.796115 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:52:57.796126 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:52:57.796140 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:52:57.796157 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:52:57.796168 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:52:57.796181 | orchestrator |
2025-05-13 23:52:57.796193 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-05-13 23:52:57.796208 | orchestrator | Tuesday 13 May 2025 23:50:16 +0000 (0:00:00.754) 0:01:26.459 ***********
2025-05-13 23:52:57.796220 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:52:57.796230 | orchestrator |
2025-05-13 23:52:57.796236 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-05-13 23:52:57.796243 | orchestrator | Tuesday 13 May 2025 23:50:18 +0000 (0:00:01.926) 0:01:28.386 ***********
2025-05-13 23:52:57.796249 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:52:57.796256 | orchestrator |
2025-05-13 23:52:57.796262 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-05-13 23:52:57.796268 | orchestrator | Tuesday 13 May 2025 23:50:20 +0000 (0:00:02.023) 0:01:30.409 ***********
2025-05-13 23:52:57.796275 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:52:57.796281 | orchestrator |
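[editor's note] The three tasks above are the standard kolla-ansible database bootstrap pattern: create the service database, create its user, then run a one-shot bootstrap container that performs the schema migration (cinder-manage db sync) before any long-running service starts. A minimal sketch of the equivalent steps; the module choice and the variables database_address, database_password and cinder_database_password are illustrative assumptions, not the role's actual internals:

    # Sketch only - the real cinder role templates these tasks differently.
    - name: Create cinder database
      community.mysql.mysql_db:
        login_host: "{{ database_address }}"       # assumed variable
        login_user: root
        login_password: "{{ database_password }}"  # assumed variable
        name: cinder
        state: present

    - name: Create cinder database user and set permissions
      community.mysql.mysql_user:
        login_host: "{{ database_address }}"
        login_user: root
        login_password: "{{ database_password }}"
        name: cinder
        password: "{{ cinder_database_password }}"  # assumed variable
        priv: "cinder.*:ALL"
        state: present

    - name: Run the one-shot bootstrap container
      # Kolla images run their bootstrap hook (here: cinder-manage db sync)
      # and exit when KOLLA_BOOTSTRAP is set in the environment.
      ansible.builtin.command: >
        docker run --rm --env KOLLA_BOOTSTRAP=1
        registry.osism.tech/kolla/cinder-api:2024.2
      changed_when: true

The roughly 20-second runtime of the bootstrap task in the recap below is dominated by this schema migration.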
2025-05-13 23:52:57.796288 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-13 23:52:57.796294 | orchestrator | Tuesday 13 May 2025 23:50:40 +0000 (0:00:20.332) 0:01:50.741 ***********
2025-05-13 23:52:57.796301 | orchestrator |
2025-05-13 23:52:57.796344 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-13 23:52:57.796351 | orchestrator | Tuesday 13 May 2025 23:50:40 +0000 (0:00:00.075) 0:01:50.817 ***********
2025-05-13 23:52:57.796358 | orchestrator |
2025-05-13 23:52:57.796365 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-13 23:52:57.796372 | orchestrator | Tuesday 13 May 2025 23:50:40 +0000 (0:00:00.073) 0:01:50.890 ***********
2025-05-13 23:52:57.796378 | orchestrator |
2025-05-13 23:52:57.796384 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-13 23:52:57.796391 | orchestrator | Tuesday 13 May 2025 23:50:40 +0000 (0:00:00.064) 0:01:50.955 ***********
2025-05-13 23:52:57.796397 | orchestrator |
2025-05-13 23:52:57.796404 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-13 23:52:57.796410 | orchestrator | Tuesday 13 May 2025 23:50:41 +0000 (0:00:00.072) 0:01:51.028 ***********
2025-05-13 23:52:57.796417 | orchestrator |
2025-05-13 23:52:57.796423 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-13 23:52:57.796430 | orchestrator | Tuesday 13 May 2025 23:50:41 +0000 (0:00:00.066) 0:01:51.094 ***********
2025-05-13 23:52:57.796436 | orchestrator |
2025-05-13 23:52:57.796443 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-05-13 23:52:57.796449 | orchestrator | Tuesday 13 May 2025 23:50:41 +0000 (0:00:00.071) 0:01:51.165 ***********
2025-05-13 23:52:57.796456 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:52:57.796462 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:52:57.796469 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:52:57.796475 | orchestrator |
2025-05-13 23:52:57.796482 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-05-13 23:52:57.796488 | orchestrator | Tuesday 13 May 2025 23:51:08 +0000 (0:00:27.786) 0:02:18.952 ***********
2025-05-13 23:52:57.796495 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:52:57.796501 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:52:57.796508 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:52:57.796514 | orchestrator |
2025-05-13 23:52:57.796521 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-05-13 23:52:57.796527 | orchestrator | Tuesday 13 May 2025 23:51:21 +0000 (0:00:12.205) 0:02:31.157 ***********
2025-05-13 23:52:57.796534 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:52:57.796540 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:52:57.796553 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:52:57.796560 | orchestrator |
2025-05-13 23:52:57.796567 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-05-13 23:52:57.796573 | orchestrator | Tuesday 13 May 2025 23:52:46 +0000 (0:01:25.265) 0:03:56.422 ***********
2025-05-13 23:52:57.796580 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:52:57.796586 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:52:57.796593 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:52:57.796600 | orchestrator |
2025-05-13 23:52:57.796606 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-05-13 23:52:57.796613 | orchestrator | Tuesday 13 May 2025 23:52:55 +0000 (0:00:08.956) 0:04:05.378 ***********
2025-05-13 23:52:57.796619 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:52:57.796626 | orchestrator |
2025-05-13 23:52:57.796632 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:52:57.796639 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-13 23:52:57.796647 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-13 23:52:57.796658 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-13 23:52:57.796665 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-13 23:52:57.796671 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-13 23:52:57.796678 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-13 23:52:57.796684 | orchestrator |
2025-05-13 23:52:57.796691 | orchestrator |
2025-05-13 23:52:57.796698 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:52:57.796704 | orchestrator | Tuesday 13 May 2025 23:52:56 +0000 (0:00:00.635) 0:04:06.014 ***********
2025-05-13 23:52:57.796711 | orchestrator | ===============================================================================
2025-05-13 23:52:57.796717 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 85.27s
2025-05-13 23:52:57.796724 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.79s
2025-05-13 23:52:57.796730 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.33s
2025-05-13 23:52:57.796736 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.98s
2025-05-13 23:52:57.796743 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.21s
2025-05-13 23:52:57.796749 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.96s
2025-05-13 23:52:57.796756 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.82s
2025-05-13 23:52:57.796762 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.37s
2025-05-13 23:52:57.796772 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.36s
2025-05-13 23:52:57.796779 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.09s
2025-05-13 23:52:57.796786 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.89s
2025-05-13 23:52:57.796792 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.60s
2025-05-13 23:52:57.796799 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.16s
2025-05-13 23:52:57.796805 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.10s
2025-05-13 23:52:57.796812 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.08s
2025-05-13 23:52:57.796823 | orchestrator | cinder : include_tasks -------------------------------------------------- 3.02s
2025-05-13 23:52:57.796829 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.82s
2025-05-13 23:52:57.796836 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.55s
2025-05-13 23:52:57.796842 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.26s
2025-05-13 23:52:57.796849 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.22s
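[editor's note] The timing table above is the output of a profile_tasks-style callback, which records wall-clock duration per task and prints the slowest entries at the end of the run; here the handler restarts of the data-plane containers dominate (cinder-volume alone: 85.27s). A minimal sketch of how such a callback is enabled, assuming the ansible.posix collection is installed (the deployment image may configure this differently):

    # ansible.cfg (sketch)
    [defaults]
    # record per-task wall-clock times and print a sorted summary
    callbacks_enabled = ansible.posix.profile_tasks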
2025-05-13 23:53:00.856329 | orchestrator | 2025-05-13 23:53:00 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:00.858497 | orchestrator | 2025-05-13 23:53:00 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:00.861887 | orchestrator | 2025-05-13 23:53:00 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:00.863873 | orchestrator | 2025-05-13 23:53:00 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:00.864459 | orchestrator | 2025-05-13 23:53:00 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:03.907827 | orchestrator | 2025-05-13 23:53:03 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:03.908001 | orchestrator | 2025-05-13 23:53:03 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:03.908159 | orchestrator | 2025-05-13 23:53:03 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:03.909072 | orchestrator | 2025-05-13 23:53:03 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:03.909109 | orchestrator | 2025-05-13 23:53:03 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:06.949876 | orchestrator | 2025-05-13 23:53:06 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:06.952181 | orchestrator | 2025-05-13 23:53:06 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:06.953859 | orchestrator | 2025-05-13 23:53:06 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:06.956205 | orchestrator | 2025-05-13 23:53:06 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:06.956282 | orchestrator | 2025-05-13 23:53:06 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:10.029665 | orchestrator | 2025-05-13 23:53:10 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:10.031710 | orchestrator | 2025-05-13 23:53:10 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:10.035141 | orchestrator | 2025-05-13 23:53:10 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:10.037555 | orchestrator | 2025-05-13 23:53:10 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:10.038091 | orchestrator | 2025-05-13 23:53:10 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:13.080286 | orchestrator | 2025-05-13 23:53:13 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:13.081606 | orchestrator | 2025-05-13 23:53:13 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:13.082393 | orchestrator | 2025-05-13 23:53:13 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:13.083539 | orchestrator | 2025-05-13 23:53:13 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:13.083612 | orchestrator | 2025-05-13 23:53:13 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:16.129746 | orchestrator | 2025-05-13 23:53:16 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:16.131053 | orchestrator | 2025-05-13 23:53:16 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:16.132396 | orchestrator | 2025-05-13 23:53:16 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:16.137517 | orchestrator | 2025-05-13 23:53:16 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:16.137572 | orchestrator | 2025-05-13 23:53:16 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:19.183609 | orchestrator | 2025-05-13 23:53:19 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:19.185221 | orchestrator | 2025-05-13 23:53:19 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:19.186577 | orchestrator | 2025-05-13 23:53:19 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:19.188361 | orchestrator | 2025-05-13 23:53:19 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:19.188399 | orchestrator | 2025-05-13 23:53:19 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:22.241997 | orchestrator | 2025-05-13 23:53:22 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:22.242224 | orchestrator | 2025-05-13 23:53:22 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:22.243114 | orchestrator | 2025-05-13 23:53:22 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:22.245036 | orchestrator | 2025-05-13 23:53:22 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:22.245075 | orchestrator | 2025-05-13 23:53:22 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:25.299418 | orchestrator | 2025-05-13 23:53:25 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:25.299509 | orchestrator | 2025-05-13 23:53:25 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:25.299526 | orchestrator | 2025-05-13 23:53:25 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:25.310817 | orchestrator | 2025-05-13 23:53:25 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:25.310960 | orchestrator | 2025-05-13 23:53:25 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:28.343669 | orchestrator | 2025-05-13 23:53:28 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:28.346212 | orchestrator | 2025-05-13 23:53:28 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:28.349536 | orchestrator | 2025-05-13 23:53:28 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:28.351579 | orchestrator | 2025-05-13 23:53:28 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:28.351992 | orchestrator | 2025-05-13 23:53:28 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:31.399722 | orchestrator | 2025-05-13 23:53:31 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:31.400248 | orchestrator | 2025-05-13 23:53:31 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:31.401199 | orchestrator | 2025-05-13 23:53:31 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:31.402327 | orchestrator | 2025-05-13 23:53:31 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state STARTED
2025-05-13 23:53:31.402347 | orchestrator | 2025-05-13 23:53:31 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:34.451347 | orchestrator | 2025-05-13 23:53:34 | INFO  | Task f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868 is in state STARTED
2025-05-13 23:53:34.452666 | orchestrator | 2025-05-13 23:53:34 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:34.455118 | orchestrator | 2025-05-13 23:53:34 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:34.456781 | orchestrator | 2025-05-13 23:53:34 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:34.460170 | orchestrator | 2025-05-13 23:53:34 | INFO  | Task 171418d9-41d8-4acf-8b85-76d7cdb64530 is in state SUCCESS
2025-05-13 23:53:34.461414 | orchestrator |
2025-05-13 23:53:34.461471 | orchestrator |
2025-05-13 23:53:34.461478 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:53:34.461484 | orchestrator |
2025-05-13 23:53:34.461490 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:53:34.461505 | orchestrator | Tuesday 13 May 2025 23:51:35 +0000 (0:00:00.235) 0:00:00.235 ***********
2025-05-13 23:53:34.461513 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:53:34.461553 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:53:34.461563 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:53:34.461572 | orchestrator |
2025-05-13 23:53:34.461581 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 23:53:34.461589 | orchestrator | Tuesday 13 May 2025 23:51:36 +0000 (0:00:00.244) 0:00:00.479 ***********
2025-05-13 23:53:34.461594 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-05-13 23:53:34.461600 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-05-13 23:53:34.461606 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-05-13 23:53:34.461611 | orchestrator |
2025-05-13 23:53:34.461616 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-05-13 23:53:34.461621 | orchestrator |
2025-05-13 23:53:34.461627 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-13 23:53:34.461632 | orchestrator | Tuesday 13 May 2025 23:51:36 +0000 (0:00:00.570) 0:00:01.050 ***********
2025-05-13 23:53:34.461637 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:53:34.461643 | orchestrator |
2025-05-13 23:53:34.461649 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-05-13 23:53:34.461654 | orchestrator | Tuesday 13 May 2025 23:51:37 +0000 (0:00:00.995) 0:00:02.045 ***********
2025-05-13 23:53:34.461659 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-05-13 23:53:34.461665 | orchestrator |
2025-05-13 23:53:34.461670 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-05-13 23:53:34.461675 | orchestrator | Tuesday 13 May 2025 23:51:40 +0000 (0:00:03.360) 0:00:05.405 ***********
2025-05-13 23:53:34.461680 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-05-13 23:53:34.461685 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-05-13 23:53:34.461691 | orchestrator |
2025-05-13 23:53:34.461696 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-05-13 23:53:34.461702 | orchestrator | Tuesday 13 May 2025 23:51:46 +0000 (0:00:05.976) 0:00:11.382 ***********
2025-05-13 23:53:34.461707 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-13 23:53:34.461712 | orchestrator |
2025-05-13 23:53:34.461717 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-05-13 23:53:34.461744 | orchestrator | Tuesday 13 May 2025 23:51:50 +0000 (0:00:03.203) 0:00:14.586 ***********
2025-05-13 23:53:34.461750 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-13 23:53:34.461756 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-05-13 23:53:34.461761 | orchestrator |
2025-05-13 23:53:34.461767 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-05-13 23:53:34.461772 | orchestrator | Tuesday 13 May 2025 23:51:53 +0000 (0:00:03.728) 0:00:18.315 ***********
2025-05-13 23:53:34.461777 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-13 23:53:34.461783 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-05-13 23:53:34.461788 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-05-13 23:53:34.461793 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-05-13 23:53:34.461845 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-05-13 23:53:34.461852 | orchestrator |
2025-05-13 23:53:34.461857 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-05-13 23:53:34.461862 | orchestrator | Tuesday 13 May 2025 23:52:10 +0000 (0:00:16.490) 0:00:34.805 ***********
2025-05-13 23:53:34.461867 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-05-13 23:53:34.461873 | orchestrator |
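[editor's note] The service-ks-register block above is the usual Keystone enrollment for a new service: catalog entry, internal and public endpoints, the service project and user, and the Barbican-specific roles (creator, observer, audit), finishing with a role grant for the service user. A rough equivalent using the openstack.cloud collection; the cloud name testbed and the loop layout are illustrative assumptions, not the role's actual templating:

    - name: Register barbican in the service catalog (sketch)
      openstack.cloud.catalog_service:
        cloud: testbed              # assumed clouds.yaml entry
        name: barbican
        service_type: key-manager
        state: present

    - name: Create internal and public endpoints (sketch)
      openstack.cloud.endpoint:
        cloud: testbed
        service: barbican
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:9311" }
        - { interface: public, url: "https://api.testbed.osism.xyz:9311" }

    - name: Grant the admin role to the barbican service user (sketch)
      openstack.cloud.role_assignment:
        cloud: testbed
        user: barbican
        role: admin
        project: service

The no_log warning during user creation is emitted because the module handles a password without marking update_password as sensitive; the task itself still completed (changed).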
2025-05-13 23:53:34.461909 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-05-13 23:53:34.461916 | orchestrator | Tuesday 13 May 2025 23:52:14 +0000 (0:00:04.370) 0:00:39.176 ***********
2025-05-13 23:53:34.461923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-13 23:53:34.461943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-13 23:53:34.461950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-13 23:53:34.461969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-13 23:53:34.461980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-13 23:53:34.461985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-13 23:53:34.461995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-13 23:53:34.462003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-13 23:53:34.462009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-13 23:53:34.462059 | orchestrator |
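[editor's note] Each (item=...) in these loops is one entry of the role's service map: container name, group, image, bind mounts, HAProxy frontends, and a healthcheck that kolla passes through to Docker (healthcheck_curl probes the API port, healthcheck_port checks that the process holds a connection to port 5672, i.e. RabbitMQ). Rendered as YAML, the barbican-api entry from the log above has this shape (values reproduced from the log; this is the data structure the role iterates over, not a file written by hand):

    barbican-api:
      container_name: barbican_api
      group: barbican-api
      enabled: true
      image: registry.osism.tech/kolla/barbican-api:2024.2
      volumes:
        - /etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - barbican:/var/lib/barbican/
        - kolla_logs:/var/log/kolla/
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"]
        timeout: "30"
      haproxy:
        barbican_api:
          enabled: "yes"
          mode: http
          external: false
          port: "9311"
          listen_port: "9311"
          tls_backend: "no"
        barbican_api_external:
          enabled: "yes"
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "9311"
          listen_port: "9311"
          tls_backend: "no"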
2025-05-13 23:53:34.462068 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-05-13 23:53:34.462074 | orchestrator | Tuesday 13 May 2025 23:52:16 +0000 (0:00:02.179) 0:00:41.355 ***********
2025-05-13 23:53:34.462079 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-05-13 23:53:34.462084 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-05-13 23:53:34.462092 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-05-13 23:53:34.462100 | orchestrator |
2025-05-13 23:53:34.462108 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-05-13 23:53:34.462116 | orchestrator | Tuesday 13 May 2025 23:52:17 +0000 (0:00:01.084) 0:00:42.441 ***********
2025-05-13 23:53:34.462124 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:53:34.462133 | orchestrator |
2025-05-13 23:53:34.462141 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-05-13 23:53:34.462148 | orchestrator | Tuesday 13 May 2025 23:52:18 +0000 (0:00:00.229) 0:00:42.671 ***********
2025-05-13 23:53:34.462155 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:53:34.462162 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:53:34.462169 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:53:34.462177 | orchestrator |
2025-05-13 23:53:34.462185 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-13 23:53:34.462193 | orchestrator | Tuesday 13 May 2025 23:52:19 +0000 (0:00:00.928) 0:00:43.599 ***********
2025-05-13 23:53:34.462200 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:53:34.462207 | orchestrator |
2025-05-13 23:53:34.462214 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-05-13 23:53:34.462222 | orchestrator | Tuesday 13 May 2025 23:52:19 +0000 (0:00:00.681) 0:00:44.280 ***********
2025-05-13 23:53:34.462235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-13 23:53:34.462260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-13 23:53:34.462269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 23:53:34.462284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462358 | orchestrator | 2025-05-13 23:53:34.462368 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-13 23:53:34.462381 | orchestrator | Tuesday 13 May 2025 23:52:23 +0000 (0:00:04.147) 0:00:48.428 *********** 2025-05-13 23:53:34.462391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 23:53:34.462400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 
5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462423 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:53:34.462438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 23:53:34.462454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462472 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:53:34.462480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 23:53:34.462494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462507 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:53:34.462512 | orchestrator | 2025-05-13 23:53:34.462517 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-13 23:53:34.462522 | orchestrator | Tuesday 13 May 2025 23:52:25 +0000 (0:00:01.574) 0:00:50.002 *********** 2025-05-13 23:53:34.462533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 23:53:34.462548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462559 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:53:34.462565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 23:53:34.462573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462587 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:53:34.462597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 23:53:34.462603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:53:34.462613 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:53:34.462618 | orchestrator | 2025-05-13 23:53:34.462623 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-13 23:53:34.462629 | orchestrator | Tuesday 13 May 2025 23:52:26 +0000 (0:00:01.327) 0:00:51.330 *********** 2025-05-13 23:53:34.462638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 23:53:34.462647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 23:53:34.462656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 23:53:34.462663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462707 | orchestrator | 2025-05-13 23:53:34.462712 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-13 23:53:34.462717 | orchestrator | Tuesday 13 May 2025 23:52:31 +0000 (0:00:04.237) 0:00:55.567 *********** 2025-05-13 23:53:34.462723 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:53:34.462728 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:53:34.462734 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:53:34.462739 | orchestrator | 2025-05-13 23:53:34.462744 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-13 23:53:34.462749 | orchestrator | Tuesday 13 May 2025 23:52:33 +0000 (0:00:02.058) 0:00:57.626 *********** 2025-05-13 23:53:34.462786 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:53:34.462795 | orchestrator | 2025-05-13 23:53:34.462803 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-13 23:53:34.462811 | orchestrator | Tuesday 13 May 2025 23:52:34 +0000 (0:00:01.142) 0:00:58.769 *********** 2025-05-13 23:53:34.462819 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:53:34.462826 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:53:34.462834 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:53:34.462843 | orchestrator | 2025-05-13 23:53:34.462850 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-13 23:53:34.462858 | orchestrator | Tuesday 13 May 2025 23:52:35 +0000 (0:00:00.964) 0:00:59.734 *********** 2025-05-13 23:53:34.462867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 23:53:34.462879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 23:53:34.462935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 23:53:34.462944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 23:53:34.462952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
[... in the records below, the per-item dicts repeat the values already printed above verbatim and are shortened to their item keys ...]
2025-05-13 23:53:34.462960 | orchestrator | changed: [testbed-node-2] => (item=barbican-keystone-listener)
2025-05-13 23:53:34.462967 | orchestrator | changed: [testbed-node-0] => (item=barbican-worker)
2025-05-13 23:53:34.462985 | orchestrator | changed: [testbed-node-1] => (item=barbican-worker)
2025-05-13 23:53:34.462993 | orchestrator | changed: [testbed-node-2] => (item=barbican-worker)
2025-05-13 23:53:34.463001 | orchestrator | 
2025-05-13 23:53:34.463008 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-05-13 23:53:34.463016 | orchestrator | Tuesday 13 May 2025 23:52:45 +0000 (0:00:10.331) 0:01:10.066 ***********
2025-05-13 23:53:34.463028 | orchestrator | skipping: [testbed-node-1] => (item=barbican-api)
2025-05-13 23:53:34.463036 | orchestrator | skipping: [testbed-node-1] => (item=barbican-keystone-listener)
2025-05-13 23:53:34.463043 | orchestrator | skipping: [testbed-node-1] => (item=barbican-worker)
2025-05-13 23:53:34.463056 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:53:34.463069 | orchestrator | skipping: [testbed-node-0] => (item=barbican-api)
2025-05-13 23:53:34.463077 | orchestrator | skipping: [testbed-node-0] => (item=barbican-keystone-listener)
2025-05-13 23:53:34.463090 | orchestrator | skipping: [testbed-node-0] => (item=barbican-worker)
2025-05-13 23:53:34.463098 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:53:34.463105 | orchestrator | skipping: [testbed-node-2] => (item=barbican-api)
2025-05-13 23:53:34.463113 | orchestrator | skipping: [testbed-node-2] => (item=barbican-keystone-listener)
2025-05-13 23:53:34.463121 | orchestrator | skipping: [testbed-node-2] => (item=barbican-worker)
2025-05-13 23:53:34.463134 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:53:34.463141 | orchestrator | 
2025-05-13 23:53:34.463149 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-05-13 23:53:34.463157 | orchestrator | Tuesday 13 May 2025 23:52:46 +0000 (0:00:01.323) 0:01:11.389 ***********
2025-05-13 23:53:34.463168 | orchestrator | changed: [testbed-node-1] => (item=barbican-api)
2025-05-13 23:53:34.463182 | orchestrator | changed: [testbed-node-0] => (item=barbican-api)
2025-05-13 23:53:34.463190 | orchestrator | changed: [testbed-node-2] => (item=barbican-api)
2025-05-13 23:53:34.463198 | orchestrator | changed: [testbed-node-0] => (item=barbican-keystone-listener)
2025-05-13 23:53:34.463215 | orchestrator | changed: [testbed-node-1] => (item=barbican-keystone-listener)
2025-05-13 23:53:34.463228 | orchestrator | changed: [testbed-node-2] => (item=barbican-keystone-listener)
2025-05-13 23:53:34.463236 | orchestrator | changed: [testbed-node-1] => (item=barbican-worker)
2025-05-13 23:53:34.463251 | orchestrator | changed: [testbed-node-0] => (item=barbican-worker)
2025-05-13 23:53:34.463260 | orchestrator | changed: [testbed-node-2] => (item=barbican-worker)
2025-05-13 23:53:34.463269 | orchestrator | 
2025-05-13 23:53:34.463277 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-13 23:53:34.463284 | orchestrator | Tuesday 13 May 2025 23:52:50 +0000 (0:00:03.930) 0:01:15.319 ***********
2025-05-13 23:53:34.463293 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:53:34.463299 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:53:34.463304 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:53:34.463309 | orchestrator | 
2025-05-13 23:53:34.463314 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-05-13 23:53:34.463323 | orchestrator | Tuesday 13 May 2025 23:52:51 +0000 (0:00:00.298) 0:01:15.617 ***********
2025-05-13 23:53:34.463328 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:53:34.463333 | orchestrator | 
2025-05-13 23:53:34.463337 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-05-13 23:53:34.463342 | orchestrator | Tuesday 13 May 2025 23:52:53 +0000 (0:00:02.047) 0:01:17.665 ***********
2025-05-13 23:53:34.463347 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:53:34.463351 | orchestrator | 
2025-05-13 23:53:34.463356 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-05-13 23:53:34.463361 | orchestrator | Tuesday 13 May 2025 23:52:55 +0000 (0:00:02.066) 0:01:19.732 ***********
2025-05-13 23:53:34.463366 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:53:34.463370 | orchestrator | 
2025-05-13 23:53:34.463375 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-13 23:53:34.463379 | orchestrator | Tuesday 13 May 2025 23:53:06 +0000 (0:00:11.183) 0:01:30.915 ***********
2025-05-13 23:53:34.463384 | orchestrator | 
2025-05-13 23:53:34.463388 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-13 23:53:34.463393 | orchestrator | Tuesday 13 May 2025 23:53:06 +0000 (0:00:00.065) 0:01:30.981 ***********
2025-05-13 23:53:34.463397 | orchestrator | 
2025-05-13 23:53:34.463402 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-13 23:53:34.463407 | orchestrator | Tuesday 13 May 2025 23:53:06 +0000 (0:00:00.063) 0:01:31.044 ***********
2025-05-13 23:53:34.463411 | orchestrator | 
2025-05-13 23:53:34.463415 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-05-13 23:53:34.463420 | orchestrator | Tuesday 13 May 2025 23:53:06 +0000 (0:00:00.080) 0:01:31.124 ***********
2025-05-13 23:53:34.463424 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:53:34.463429 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:53:34.463434 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:53:34.463439 | orchestrator | 
2025-05-13 23:53:34.463443 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-05-13 23:53:34.463447 | orchestrator | Tuesday 13 May 2025 23:53:13 +0000 (0:00:06.405) 0:01:37.530 ***********
2025-05-13 23:53:34.463452 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:53:34.463457 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:53:34.463462 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:53:34.463467 | orchestrator | 
2025-05-13 23:53:34.463475 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-05-13 23:53:34.463480 | orchestrator | Tuesday 13 May 2025 23:53:20 +0000 (0:00:07.667) 0:01:45.198 ***********
2025-05-13 23:53:34.463485 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:53:34.463490 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:53:34.463494 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:53:34.463499 | orchestrator | 
2025-05-13 23:53:34.463504 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:53:34.463510 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-13 23:53:34.463516 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 23:53:34.463522 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 23:53:34.463526 | orchestrator | 
2025-05-13 23:53:34.463531 | orchestrator | 
2025-05-13 23:53:34.463536 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:53:34.463541 | orchestrator | Tuesday 13 May 2025 23:53:31 +0000 (0:00:10.541) 0:01:55.739 ***********
2025-05-13 23:53:34.463545 | orchestrator | ===============================================================================
2025-05-13 23:53:34.463550 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.49s
2025-05-13 23:53:34.463563 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.18s
2025-05-13 23:53:34.463568 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.54s
2025-05-13 23:53:34.463573 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.33s
2025-05-13 23:53:34.463578 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.67s
2025-05-13 23:53:34.463583 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.41s
2025-05-13 23:53:34.463587 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.98s
2025-05-13 23:53:34.463592 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.37s
2025-05-13 23:53:34.463597 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.24s
2025-05-13 23:53:34.463601 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.15s
2025-05-13 23:53:34.463606 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.93s
2025-05-13 23:53:34.463610 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.73s
2025-05-13 23:53:34.463615 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.36s
2025-05-13 23:53:34.463620 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.20s
2025-05-13 23:53:34.463625 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.18s
2025-05-13 23:53:34.463630 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.07s
2025-05-13 23:53:34.463634 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.06s
2025-05-13 23:53:34.463639 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.05s
2025-05-13 23:53:34.463644 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.57s
2025-05-13 23:53:34.463648 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.33s
2025-05-13 23:53:34.463653 | orchestrator | 2025-05-13 23:53:34 | INFO  | Wait 1 second(s) until the next check
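Aside: the "Creating barbican database" and "Creating barbican database user and setting permissions" tasks above amount to a few MariaDB statements. A rough sketch of the equivalent SQL, assuming kolla's usual single-database grant; the grant scope and the '<password>' placeholder are assumptions, not values from the log:

# Illustrative only: approximate SQL behind the two database tasks above.
# '<password>' stands in for the generated secret and is not from the log.
DB_SETUP = [
    "CREATE DATABASE IF NOT EXISTS barbican;",
    "CREATE USER IF NOT EXISTS 'barbican'@'%' IDENTIFIED BY '<password>';",
    "GRANT ALL PRIVILEGES ON barbican.* TO 'barbican'@'%';",
]

for statement in DB_SETUP:
    print(statement)  # a real run would execute these against the database VIP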
2025-05-13 23:53:37.501606 | orchestrator | 2025-05-13 23:53:37 | INFO  | Task f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868 is in state STARTED
2025-05-13 23:53:37.502078 | orchestrator | 2025-05-13 23:53:37 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:37.502822 | orchestrator | 2025-05-13 23:53:37 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:37.503666 | orchestrator | 2025-05-13 23:53:37 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:37.503713 | orchestrator | 2025-05-13 23:53:37 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:40.537611 | orchestrator | 2025-05-13 23:53:40 | INFO  | Task f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868 is in state STARTED
2025-05-13 23:53:40.538215 | orchestrator | 2025-05-13 23:53:40 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:40.545057 | orchestrator | 2025-05-13 23:53:40 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:40.545807 | orchestrator | 2025-05-13 23:53:40 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:40.545852 | orchestrator | 2025-05-13 23:53:40 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:43.575534 | orchestrator | 2025-05-13 23:53:43 | INFO  | Task f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868 is in state STARTED
2025-05-13 23:53:43.577491 | orchestrator | 2025-05-13 23:53:43 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:43.577585 | orchestrator | 2025-05-13 23:53:43 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:43.577802 | orchestrator | 2025-05-13 23:53:43 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:43.577832 | orchestrator | 2025-05-13 23:53:43 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:46.615702 | orchestrator | 2025-05-13 23:53:46 | INFO  | Task f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868 is in state STARTED
2025-05-13 23:53:46.616033 | orchestrator | 2025-05-13 23:53:46 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:46.617956 | orchestrator | 2025-05-13 23:53:46 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:46.618519 | orchestrator | 2025-05-13 23:53:46 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:46.618604 | orchestrator | 2025-05-13 23:53:46 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:49.645983 | orchestrator | 2025-05-13 23:53:49 | INFO  | Task f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868 is in state STARTED
2025-05-13 23:53:49.647175 | orchestrator | 2025-05-13 23:53:49 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:49.649599 | orchestrator | 2025-05-13 23:53:49 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:49.650404 | orchestrator | 2025-05-13 23:53:49 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:49.650436 | orchestrator | 2025-05-13 23:53:49 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:53:52.693072 | orchestrator | 2025-05-13 23:53:52 | INFO  | Task f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868 is in state STARTED
2025-05-13 23:53:52.693476 | orchestrator | 2025-05-13 23:53:52 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED
2025-05-13 23:53:52.695294 | orchestrator | 2025-05-13 23:53:52 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:53:52.699752 | orchestrator | 2025-05-13 23:53:52 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED
2025-05-13 23:53:52.699815 | orchestrator | 2025-05-13 23:53:52 | INFO  | Wait 1 second(s) until the next check
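Aside: the repeating "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines are produced by a wait loop that polls task state until every task finishes. A minimal sketch of such a loop, assuming a get_state(task_id) callable as a hypothetical stand-in for whatever result backend the real tooling queries:

import time

def wait_for_tasks(task_ids, get_state, interval=1):
    # Poll every `interval` seconds until no task is left unfinished.
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

# e.g. wait_for_tasks(["f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868"], get_state=my_backend.state)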
[... identical polling rounds, one every ~3 seconds from 23:53:55 to 2025-05-13 23:54:56, are omitted; all four tasks remain in state STARTED throughout ...]
2025-05-13 23:54:59.839851 | orchestrator | 2025-05-13 23:54:59 | INFO  | Task f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868 is in state STARTED
2025-05-13 23:54:59.840435 | orchestrator | 2025-05-13
23:54:59 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:54:59.841023 | orchestrator | 2025-05-13 23:54:59 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:54:59.841817 | orchestrator | 2025-05-13 23:54:59 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:54:59.841841 | orchestrator | 2025-05-13 23:54:59 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:02.901125 | orchestrator | 2025-05-13 23:55:02 | INFO  | Task f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868 is in state STARTED 2025-05-13 23:55:02.902907 | orchestrator | 2025-05-13 23:55:02 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:02.904285 | orchestrator | 2025-05-13 23:55:02 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:02.906329 | orchestrator | 2025-05-13 23:55:02 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:55:02.906404 | orchestrator | 2025-05-13 23:55:02 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:05.948839 | orchestrator | 2025-05-13 23:55:05 | INFO  | Task f4b6fe5a-e4f2-4f86-a8a6-8c971f2f3868 is in state SUCCESS 2025-05-13 23:55:05.949650 | orchestrator | 2025-05-13 23:55:05 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:05.950602 | orchestrator | 2025-05-13 23:55:05 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:05.950894 | orchestrator | 2025-05-13 23:55:05 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:55:05.953051 | orchestrator | 2025-05-13 23:55:05 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:05.953091 | orchestrator | 2025-05-13 23:55:05 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:08.986948 | orchestrator | 2025-05-13 23:55:08 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:08.991485 | orchestrator | 2025-05-13 23:55:08 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:08.991835 | orchestrator | 2025-05-13 23:55:08 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:55:08.993511 | orchestrator | 2025-05-13 23:55:08 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:08.993574 | orchestrator | 2025-05-13 23:55:08 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:12.046496 | orchestrator | 2025-05-13 23:55:12 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:12.047642 | orchestrator | 2025-05-13 23:55:12 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:12.049452 | orchestrator | 2025-05-13 23:55:12 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:55:12.051236 | orchestrator | 2025-05-13 23:55:12 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:12.051513 | orchestrator | 2025-05-13 23:55:12 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:15.098839 | orchestrator | 2025-05-13 23:55:15 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:15.100424 | orchestrator | 2025-05-13 23:55:15 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:15.108758 | orchestrator | 2025-05-13 
23:55:15 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:55:15.108836 | orchestrator | 2025-05-13 23:55:15 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:15.108847 | orchestrator | 2025-05-13 23:55:15 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:18.160344 | orchestrator | 2025-05-13 23:55:18 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:18.162104 | orchestrator | 2025-05-13 23:55:18 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:18.164042 | orchestrator | 2025-05-13 23:55:18 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:55:18.166825 | orchestrator | 2025-05-13 23:55:18 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:18.166883 | orchestrator | 2025-05-13 23:55:18 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:21.208096 | orchestrator | 2025-05-13 23:55:21 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:21.209663 | orchestrator | 2025-05-13 23:55:21 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:21.218599 | orchestrator | 2025-05-13 23:55:21 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:55:21.219517 | orchestrator | 2025-05-13 23:55:21 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:21.219549 | orchestrator | 2025-05-13 23:55:21 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:24.267257 | orchestrator | 2025-05-13 23:55:24 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:24.267798 | orchestrator | 2025-05-13 23:55:24 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:24.268632 | orchestrator | 2025-05-13 23:55:24 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:55:24.269605 | orchestrator | 2025-05-13 23:55:24 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:24.269644 | orchestrator | 2025-05-13 23:55:24 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:27.317923 | orchestrator | 2025-05-13 23:55:27 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:27.318968 | orchestrator | 2025-05-13 23:55:27 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:27.320345 | orchestrator | 2025-05-13 23:55:27 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:55:27.321671 | orchestrator | 2025-05-13 23:55:27 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:27.322117 | orchestrator | 2025-05-13 23:55:27 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:30.369826 | orchestrator | 2025-05-13 23:55:30 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:30.371489 | orchestrator | 2025-05-13 23:55:30 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:30.374116 | orchestrator | 2025-05-13 23:55:30 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state STARTED 2025-05-13 23:55:30.376075 | orchestrator | 2025-05-13 23:55:30 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:30.376102 | orchestrator | 2025-05-13 
23:55:30 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:33.421624 | orchestrator | 2025-05-13 23:55:33 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:33.422376 | orchestrator | 2025-05-13 23:55:33 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:33.424427 | orchestrator | 2025-05-13 23:55:33 | INFO  | Task 6ddd659b-3845-400e-80ef-1bd5ce031be2 is in state SUCCESS 2025-05-13 23:55:33.426176 | orchestrator | 2025-05-13 23:55:33.426209 | orchestrator | 2025-05-13 23:55:33.426216 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-05-13 23:55:33.426223 | orchestrator | 2025-05-13 23:55:33.426229 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-05-13 23:55:33.426235 | orchestrator | Tuesday 13 May 2025 23:53:37 +0000 (0:00:00.115) 0:00:00.115 *********** 2025-05-13 23:55:33.426241 | orchestrator | changed: [localhost] 2025-05-13 23:55:33.426248 | orchestrator | 2025-05-13 23:55:33.426253 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-05-13 23:55:33.426259 | orchestrator | Tuesday 13 May 2025 23:53:38 +0000 (0:00:01.110) 0:00:01.226 *********** 2025-05-13 23:55:33.426265 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2025-05-13 23:55:33.426270 | orchestrator | changed: [localhost] 2025-05-13 23:55:33.426275 | orchestrator | 2025-05-13 23:55:33.426281 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-05-13 23:55:33.426286 | orchestrator | Tuesday 13 May 2025 23:54:35 +0000 (0:00:57.308) 0:00:58.535 *********** 2025-05-13 23:55:33.426291 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 
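The long run of "is in state STARTED" messages above is the OSISM client polling its background (Celery) tasks on a fixed interval until each one reports SUCCESS; the buffered play output is then printed, which is why its timestamps predate the surrounding console timestamps. The same wait-until-done pattern can be sketched in plain Ansible with an until loop; the HTTP status endpoint below is a hypothetical stand-in, since the real client queries the manager's task backend directly rather than over HTTP:

    - name: Wait for a background task to reach SUCCESS
      ansible.builtin.uri:
        url: "https://manager.example/api/tasks/{{ task_id }}"  # hypothetical status endpoint
      register: task_state
      until: task_state.json.state == "SUCCESS"  # keep polling while the task is STARTED
      retries: 600                               # upper bound on polling attempts
      delay: 1                                   # matches "Wait 1 second(s) until the next check"

The two "FAILED - RETRYING" messages in the download play use Ansible's standard retry loop on the task itself: register the result and retry until it succeeds. A minimal sketch of that pattern, assuming ansible.builtin.get_url is the downloader; the URL and destination are illustrative placeholders, not values taken from this job:

    - name: Download ironic-agent initramfs
      ansible.builtin.get_url:
        url: "https://example.org/ipa/ironic-agent.initramfs"  # hypothetical source URL
        dest: /opt/ironic/ironic-agent.initramfs
        mode: "0644"
      register: result
      until: result is succeeded  # retry while the download fails
      retries: 3                  # first failure prints "(3 retries left)", as seen above
      delay: 5                    # seconds between attempts (illustrative)

With retries: 3 the task only fails the play once all attempts are exhausted; here both downloads succeed on their second attempt, which is why each task still ends up "changed".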
2025-05-13 23:55:33.426297 | orchestrator | changed: [localhost] 2025-05-13 23:55:33.426302 | orchestrator | 2025-05-13 23:55:33.426307 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:55:33.426313 | orchestrator | 2025-05-13 23:55:33.426318 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:55:33.426324 | orchestrator | Tuesday 13 May 2025 23:55:02 +0000 (0:00:27.102) 0:01:25.638 *********** 2025-05-13 23:55:33.426329 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:55:33.426335 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:55:33.426340 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:55:33.426346 | orchestrator | 2025-05-13 23:55:33.426351 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:55:33.426356 | orchestrator | Tuesday 13 May 2025 23:55:03 +0000 (0:00:00.321) 0:01:25.959 *********** 2025-05-13 23:55:33.426361 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-05-13 23:55:33.426366 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-05-13 23:55:33.426370 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-05-13 23:55:33.426375 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-05-13 23:55:33.426398 | orchestrator | 2025-05-13 23:55:33.426403 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-05-13 23:55:33.426408 | orchestrator | skipping: no hosts matched 2025-05-13 23:55:33.426448 | orchestrator | 2025-05-13 23:55:33.426452 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:55:33.426457 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:55:33.426464 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:55:33.426470 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:55:33.426475 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:55:33.426480 | orchestrator | 2025-05-13 23:55:33.426508 | orchestrator | 2025-05-13 23:55:33.426513 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:55:33.426526 | orchestrator | Tuesday 13 May 2025 23:55:03 +0000 (0:00:00.731) 0:01:26.691 *********** 2025-05-13 23:55:33.426531 | orchestrator | =============================================================================== 2025-05-13 23:55:33.426536 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 57.31s 2025-05-13 23:55:33.426540 | orchestrator | Download ironic-agent kernel ------------------------------------------- 27.10s 2025-05-13 23:55:33.426565 | orchestrator | Ensure the destination directory exists --------------------------------- 1.11s 2025-05-13 23:55:33.426570 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s 2025-05-13 23:55:33.426574 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-05-13 23:55:33.426579 | orchestrator | 2025-05-13 23:55:33.426583 | orchestrator | 2025-05-13 23:55:33.426587 | orchestrator | PLAY [Group 
hosts based on configuration] ************************************** 2025-05-13 23:55:33.426592 | orchestrator | 2025-05-13 23:55:33.426596 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:55:33.426601 | orchestrator | Tuesday 13 May 2025 23:50:21 +0000 (0:00:00.258) 0:00:00.258 *********** 2025-05-13 23:55:33.426605 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:55:33.426610 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:55:33.426619 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:55:33.426623 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:55:33.426628 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:55:33.426632 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:55:33.426637 | orchestrator | 2025-05-13 23:55:33.426651 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:55:33.426656 | orchestrator | Tuesday 13 May 2025 23:50:22 +0000 (0:00:00.682) 0:00:00.941 *********** 2025-05-13 23:55:33.426660 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-13 23:55:33.426664 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-13 23:55:33.426669 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-13 23:55:33.426673 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-13 23:55:33.426698 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-13 23:55:33.426706 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-13 23:55:33.426713 | orchestrator | 2025-05-13 23:55:33.426731 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-13 23:55:33.426739 | orchestrator | 2025-05-13 23:55:33.426746 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-13 23:55:33.426750 | orchestrator | Tuesday 13 May 2025 23:50:23 +0000 (0:00:00.661) 0:00:01.603 *********** 2025-05-13 23:55:33.426755 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:55:33.426765 | orchestrator | 2025-05-13 23:55:33.426770 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-13 23:55:33.426774 | orchestrator | Tuesday 13 May 2025 23:50:24 +0000 (0:00:01.233) 0:00:02.837 *********** 2025-05-13 23:55:33.426779 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:55:33.426783 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:55:33.426788 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:55:33.426792 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:55:33.426797 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:55:33.426801 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:55:33.426805 | orchestrator | 2025-05-13 23:55:33.426810 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-13 23:55:33.426814 | orchestrator | Tuesday 13 May 2025 23:50:25 +0000 (0:00:01.282) 0:00:04.119 *********** 2025-05-13 23:55:33.426818 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:55:33.426823 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:55:33.426827 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:55:33.426831 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:55:33.426836 | orchestrator | ok: [testbed-node-4] 2025-05-13 
23:55:33.426840 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:55:33.426845 | orchestrator | 2025-05-13 23:55:33.426849 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-13 23:55:33.426853 | orchestrator | Tuesday 13 May 2025 23:50:26 +0000 (0:00:01.049) 0:00:05.169 *********** 2025-05-13 23:55:33.426858 | orchestrator | ok: [testbed-node-0] => { 2025-05-13 23:55:33.426862 | orchestrator |  "changed": false, 2025-05-13 23:55:33.426867 | orchestrator |  "msg": "All assertions passed" 2025-05-13 23:55:33.426872 | orchestrator | } 2025-05-13 23:55:33.426877 | orchestrator | ok: [testbed-node-1] => { 2025-05-13 23:55:33.426881 | orchestrator |  "changed": false, 2025-05-13 23:55:33.426886 | orchestrator |  "msg": "All assertions passed" 2025-05-13 23:55:33.426890 | orchestrator | } 2025-05-13 23:55:33.426894 | orchestrator | ok: [testbed-node-2] => { 2025-05-13 23:55:33.426899 | orchestrator |  "changed": false, 2025-05-13 23:55:33.426903 | orchestrator |  "msg": "All assertions passed" 2025-05-13 23:55:33.426908 | orchestrator | } 2025-05-13 23:55:33.426912 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 23:55:33.426916 | orchestrator |  "changed": false, 2025-05-13 23:55:33.426921 | orchestrator |  "msg": "All assertions passed" 2025-05-13 23:55:33.426925 | orchestrator | } 2025-05-13 23:55:33.426929 | orchestrator | ok: [testbed-node-4] => { 2025-05-13 23:55:33.426934 | orchestrator |  "changed": false, 2025-05-13 23:55:33.426938 | orchestrator |  "msg": "All assertions passed" 2025-05-13 23:55:33.426943 | orchestrator | } 2025-05-13 23:55:33.426947 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 23:55:33.426951 | orchestrator |  "changed": false, 2025-05-13 23:55:33.426956 | orchestrator |  "msg": "All assertions passed" 2025-05-13 23:55:33.426960 | orchestrator | } 2025-05-13 23:55:33.426964 | orchestrator | 2025-05-13 23:55:33.426969 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-13 23:55:33.426973 | orchestrator | Tuesday 13 May 2025 23:50:27 +0000 (0:00:00.775) 0:00:05.945 *********** 2025-05-13 23:55:33.426978 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.426982 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.426987 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.426991 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.426996 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.427000 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.427004 | orchestrator | 2025-05-13 23:55:33.427009 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-13 23:55:33.427013 | orchestrator | Tuesday 13 May 2025 23:50:28 +0000 (0:00:00.644) 0:00:06.589 *********** 2025-05-13 23:55:33.427018 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-13 23:55:33.427022 | orchestrator | 2025-05-13 23:55:33.427027 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-13 23:55:33.427034 | orchestrator | Tuesday 13 May 2025 23:50:31 +0000 (0:00:03.295) 0:00:09.885 *********** 2025-05-13 23:55:33.427039 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-13 23:55:33.427044 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-13 
23:55:33.427048 | orchestrator | 2025-05-13 23:55:33.427053 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-13 23:55:33.427057 | orchestrator | Tuesday 13 May 2025 23:50:37 +0000 (0:00:06.339) 0:00:16.225 *********** 2025-05-13 23:55:33.427062 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-13 23:55:33.427066 | orchestrator | 2025-05-13 23:55:33.427070 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-13 23:55:33.427075 | orchestrator | Tuesday 13 May 2025 23:50:40 +0000 (0:00:03.037) 0:00:19.262 *********** 2025-05-13 23:55:33.427079 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 23:55:33.427083 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-13 23:55:33.427088 | orchestrator | 2025-05-13 23:55:33.427096 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-13 23:55:33.427100 | orchestrator | Tuesday 13 May 2025 23:50:44 +0000 (0:00:03.907) 0:00:23.170 *********** 2025-05-13 23:55:33.427104 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 23:55:33.427109 | orchestrator | 2025-05-13 23:55:33.427113 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-13 23:55:33.427118 | orchestrator | Tuesday 13 May 2025 23:50:47 +0000 (0:00:03.223) 0:00:26.393 *********** 2025-05-13 23:55:33.427122 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-13 23:55:33.427127 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-13 23:55:33.427131 | orchestrator | 2025-05-13 23:55:33.427135 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-13 23:55:33.427143 | orchestrator | Tuesday 13 May 2025 23:50:55 +0000 (0:00:07.639) 0:00:34.033 *********** 2025-05-13 23:55:33.427148 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.427152 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.427157 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.427161 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.427166 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.427170 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.427175 | orchestrator | 2025-05-13 23:55:33.427179 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-13 23:55:33.427183 | orchestrator | Tuesday 13 May 2025 23:50:56 +0000 (0:00:00.868) 0:00:34.901 *********** 2025-05-13 23:55:33.427188 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.427192 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.427197 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.427201 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.427206 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.427210 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.427214 | orchestrator | 2025-05-13 23:55:33.427219 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-13 23:55:33.427223 | orchestrator | Tuesday 13 May 2025 23:50:58 +0000 (0:00:02.474) 0:00:37.376 *********** 2025-05-13 23:55:33.427228 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:55:33.427232 | orchestrator | ok: [testbed-node-1] 
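The service-ks-register block above performs the standard Keystone bootstrap for a service: create the service entry, its internal and public endpoints, a service user in the service project, then grant roles. A condensed sketch of the pattern using the openstack.cloud collection (parameters simplified; this illustrates the pattern, not the role's actual task file):

    - name: neutron | Creating services
      openstack.cloud.catalog_service:
        name: neutron
        service_type: network
        state: present

    - name: neutron | Creating endpoints
      openstack.cloud.endpoint:
        service: neutron
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:9696" }
        - { interface: public, url: "https://api.testbed.osism.xyz:9696" }

    - name: neutron | Granting user roles
      openstack.cloud.role_assignment:
        user: neutron
        project: service
        role: "{{ item }}"
      loop:
        - admin
        - service

The "Module did not set no_log for update_password" warning comes from the user-creation step; it flags a module option that is not masked in logs and does not indicate a failure.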
2025-05-13 23:55:33.427237 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:55:33.427241 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:55:33.427245 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:55:33.427250 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:55:33.427254 | orchestrator | 2025-05-13 23:55:33.427258 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-13 23:55:33.427263 | orchestrator | Tuesday 13 May 2025 23:50:59 +0000 (0:00:01.127) 0:00:38.503 *********** 2025-05-13 23:55:33.427271 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.427275 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.427280 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.427284 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.427288 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.427293 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.427297 | orchestrator | 2025-05-13 23:55:33.427302 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-13 23:55:33.427306 | orchestrator | Tuesday 13 May 2025 23:51:02 +0000 (0:00:02.602) 0:00:41.105 *********** 2025-05-13 23:55:33.427313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.427320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.427331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.427337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.427346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.427351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.427355 | orchestrator | 2025-05-13 23:55:33.427360 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-13 23:55:33.427365 | orchestrator | Tuesday 13 May 2025 23:51:06 +0000 (0:00:03.562) 0:00:44.668 *********** 2025-05-13 23:55:33.427369 | orchestrator | [WARNING]: Skipped 2025-05-13 23:55:33.427374 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-13 23:55:33.427378 | orchestrator | due to this access issue: 2025-05-13 23:55:33.427383 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-13 23:55:33.427387 | orchestrator | a directory 
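The access-issue warning above is the benign notice ansible.builtin.find emits when a search path does not exist or is not a directory: the role probes an operator-supplied overlay directory for extra ML2 plugin files and, in this testbed, finds none. A sketch of such a probe (module usage inferred from the warning text; the register name is illustrative):

    - name: Check if extra ml2 plugins exists
      ansible.builtin.find:
        path: "/opt/configuration/environments/kolla/files/overlays/neutron/plugins/"
      delegate_to: localhost
      run_once: true
      register: extra_ml2_plugins  # matched is 0 when the directory is absent, as here

Because the path is missing, the task still reports ok (delegated to localhost, matching the "testbed-node-0 -> localhost" line below) and the deploy simply continues without extra plugins.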
2025-05-13 23:55:33.427392 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:55:33.427397 | orchestrator | 2025-05-13 23:55:33.427401 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-13 23:55:33.427405 | orchestrator | Tuesday 13 May 2025 23:51:06 +0000 (0:00:00.640) 0:00:45.308 *********** 2025-05-13 23:55:33.427410 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:55:33.427415 | orchestrator | 2025-05-13 23:55:33.427420 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-13 23:55:33.427424 | orchestrator | Tuesday 13 May 2025 23:51:07 +0000 (0:00:01.078) 0:00:46.386 *********** 2025-05-13 23:55:33.427434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.427439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.427448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.427453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.427458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.427468 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.427481 | orchestrator | 2025-05-13 23:55:33.427485 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-13 23:55:33.427490 | orchestrator | Tuesday 13 May 2025 23:51:11 +0000 (0:00:04.062) 0:00:50.449 *********** 2025-05-13 23:55:33.427495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.427499 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.427504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.427509 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.427513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.427518 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.427525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.427530 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.427543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.427548 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.427552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.427557 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.427561 | orchestrator | 2025-05-13 23:55:33.427566 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-13 23:55:33.427571 | orchestrator | Tuesday 13 May 2025 23:51:13 +0000 (0:00:02.021) 0:00:52.471 *********** 2025-05-13 23:55:33.427575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.427580 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.427585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.427590 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.427599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.427607 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.427612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.427617 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.427621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.427626 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.427631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.427635 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.427640 | orchestrator | 2025-05-13 23:55:33.427644 | orchestrator | TASK [neutron : Creating TLS backend 
PEM File] ********************************* 2025-05-13 23:55:33.427649 | orchestrator | Tuesday 13 May 2025 23:51:16 +0000 (0:00:02.316) 0:00:54.788 *********** 2025-05-13 23:55:33.427653 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.427658 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.427662 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.427667 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.427671 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.427675 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.427717 | orchestrator | 2025-05-13 23:55:33.427722 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-13 23:55:33.427731 | orchestrator | Tuesday 13 May 2025 23:51:18 +0000 (0:00:02.189) 0:00:56.977 *********** 2025-05-13 23:55:33.427735 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.427740 | orchestrator | 2025-05-13 23:55:33.427744 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-13 23:55:33.427749 | orchestrator | Tuesday 13 May 2025 23:51:18 +0000 (0:00:00.131) 0:00:57.108 *********** 2025-05-13 23:55:33.427753 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.427757 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.427762 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.427766 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.427770 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.427775 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.427779 | orchestrator | 2025-05-13 23:55:33.427786 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-13 23:55:33.427791 | orchestrator | Tuesday 13 May 2025 23:51:19 +0000 (0:00:00.734) 0:00:57.842 *********** 2025-05-13 23:55:33.427943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.427951 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.427956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.427961 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.427966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.427970 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.427979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.427984 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.427992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.427996 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.428005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.428010 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.428015 | orchestrator | 2025-05-13 23:55:33.428020 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-13 23:55:33.428024 | orchestrator | Tuesday 13 May 2025 23:51:22 +0000 (0:00:03.295) 0:01:01.138 *********** 2025-05-13 23:55:33.428029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428050 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.428055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.428060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.428065 | orchestrator | 2025-05-13 23:55:33.428069 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-13 23:55:33.428074 | orchestrator | Tuesday 13 May 2025 23:51:27 +0000 (0:00:05.336) 0:01:06.474 *********** 2025-05-13 23:55:33.428079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428111 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.428121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.428140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.428145 | orchestrator | 2025-05-13 23:55:33.428150 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-13 23:55:33.428155 | orchestrator | Tuesday 13 May 2025 23:51:36 +0000 (0:00:08.112) 0:01:14.587 *********** 2025-05-13 23:55:33.428159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.428164 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.428174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.428179 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.428183 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.428188 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.428193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428215 | orchestrator | 2025-05-13 23:55:33.428220 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-13 23:55:33.428225 | orchestrator | Tuesday 13 May 2025 23:51:38 +0000 (0:00:02.945) 0:01:17.532 *********** 2025-05-13 23:55:33.428229 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.428234 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.428238 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.428243 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:55:33.428247 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:55:33.428252 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:55:33.428257 | orchestrator | 2025-05-13 23:55:33.428263 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-13 
23:55:33.428268 | orchestrator | Tuesday 13 May 2025 23:51:41 +0000 (0:00:02.522) 0:01:20.055 *********** 2025-05-13 23:55:33.428273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.428278 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.428282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.428291 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.428295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.428300 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.428305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.428328 | orchestrator | 2025-05-13 23:55:33.428333 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-13 23:55:33.428337 | orchestrator | Tuesday 13 May 2025 23:51:45 +0000 (0:00:03.822) 0:01:23.878 *********** 2025-05-13 23:55:33.428342 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.428346 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.428351 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.428355 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.428360 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.428364 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.428369 | orchestrator | 2025-05-13 23:55:33.428373 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-13 23:55:33.428378 | orchestrator | Tuesday 13 May 2025 23:51:47 +0000 (0:00:02.224) 0:01:26.102 *********** 2025-05-13 23:55:33.428382 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.428387 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.428391 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.428396 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.428400 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.428404 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.428409 | orchestrator | 2025-05-13 23:55:33.428413 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-13 23:55:33.428418 | orchestrator | Tuesday 13 May 2025 23:51:49 +0000 
(0:00:02.207) 0:01:28.309 ***********
2025-05-13 23:55:33.428422 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.428427 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.428431 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.428435 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.428440 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.428444 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.428449 | orchestrator |
2025-05-13 23:55:33.428453 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-05-13 23:55:33.428458 | orchestrator | Tuesday 13 May 2025 23:51:52 +0000 (0:00:02.380) 0:01:30.690 ***********
2025-05-13 23:55:33.428462 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.428466 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.428471 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.428475 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.428480 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.428484 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.428489 | orchestrator |
2025-05-13 23:55:33.428493 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-05-13 23:55:33.428498 | orchestrator | Tuesday 13 May 2025 23:51:54 +0000 (0:00:02.278) 0:01:32.969 ***********
2025-05-13 23:55:33.428521 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.428526 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.428530 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.428535 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.428539 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.428543 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.428548 | orchestrator |
2025-05-13 23:55:33.428552 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-05-13 23:55:33.428557 | orchestrator | Tuesday 13 May 2025 23:51:56 +0000 (0:00:02.199) 0:01:35.169 ***********
2025-05-13 23:55:33.428561 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.428566 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.428570 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.428575 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.428582 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.428587 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.428592 | orchestrator |
2025-05-13 23:55:33.428598 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-05-13 23:55:33.428605 | orchestrator | Tuesday 13 May 2025 23:51:59 +0000 (0:00:03.254) 0:01:38.424 ***********
2025-05-13 23:55:33.428610 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-13 23:55:33.428615 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.428620 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-13 23:55:33.428626 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.428631 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-13 23:55:33.428636 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.428641 | orchestrator | skipping:
[testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-13 23:55:33.428646 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.428654 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-13 23:55:33.428659 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.428664 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-13 23:55:33.428669 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.428674 | orchestrator | 2025-05-13 23:55:33.428694 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-13 23:55:33.428699 | orchestrator | Tuesday 13 May 2025 23:52:02 +0000 (0:00:02.795) 0:01:41.220 *********** 2025-05-13 23:55:33.428705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.428710 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.428716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.428722 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.428728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.428737 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.428746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.428751 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.428928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.428935 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.428940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.428945 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.428950 | orchestrator | 2025-05-13 23:55:33.428954 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-13 23:55:33.428959 | orchestrator | Tuesday 13 May 2025 23:52:05 +0000 (0:00:03.313) 0:01:44.533 *********** 2025-05-13 23:55:33.428964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.428973 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.428978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.428982 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.428994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.428999 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.429004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.429008 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.429013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.429018 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.429023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.429030 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.429035 | orchestrator | 2025-05-13 23:55:33.429040 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-13 23:55:33.429044 | orchestrator | Tuesday 13 May 2025 23:52:09 +0000 (0:00:03.084) 0:01:47.618 *********** 2025-05-13 23:55:33.429049 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.429053 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.429058 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.429062 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.429067 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.429071 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.429076 | orchestrator | 2025-05-13 23:55:33.429080 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-13 23:55:33.429085 | orchestrator | Tuesday 13 May 2025 23:52:12 +0000 (0:00:03.117) 0:01:50.735 *********** 2025-05-13 23:55:33.429090 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.429094 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.429099 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.429103 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:55:33.429107 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:55:33.429112 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:55:33.429116 | orchestrator | 2025-05-13 23:55:33.429123 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-13 23:55:33.429128 | orchestrator | Tuesday 13 May 2025 23:52:17 +0000 (0:00:05.089) 0:01:55.825 *********** 2025-05-13 23:55:33.429133 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.429137 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.429142 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.429146 | orchestrator | skipping: 
[testbed-node-2]
2025-05-13 23:55:33.429151 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.429155 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.429159 | orchestrator |
2025-05-13 23:55:33.429164 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-05-13 23:55:33.429169 | orchestrator | Tuesday 13 May 2025 23:52:19 +0000 (0:00:02.612) 0:01:58.438 ***********
2025-05-13 23:55:33.429173 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.429180 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.429185 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.429189 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.429193 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.429198 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.429202 | orchestrator |
2025-05-13 23:55:33.429207 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-05-13 23:55:33.429212 | orchestrator | Tuesday 13 May 2025 23:52:22 +0000 (0:00:02.618) 0:02:01.057 ***********
2025-05-13 23:55:33.429216 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.429220 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.429225 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.429229 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.429234 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.429238 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.429255 | orchestrator |
2025-05-13 23:55:33.429260 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-05-13 23:55:33.429268 | orchestrator | Tuesday 13 May 2025 23:52:25 +0000 (0:00:03.283) 0:02:04.341 ***********
2025-05-13 23:55:33.429272 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.429277 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.429281 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.429286 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.429290 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.429295 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.429299 | orchestrator |
2025-05-13 23:55:33.429304 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-05-13 23:55:33.429308 | orchestrator | Tuesday 13 May 2025 23:52:29 +0000 (0:00:03.259) 0:02:07.600 ***********
2025-05-13 23:55:33.429313 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.429317 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.429322 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.429326 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.429331 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.429335 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.429339 | orchestrator |
2025-05-13 23:55:33.429344 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-05-13 23:55:33.429348 | orchestrator | Tuesday 13 May 2025 23:52:32 +0000 (0:00:03.119) 0:02:10.720 ***********
2025-05-13 23:55:33.429353 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.429357 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.429362 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.429366 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.429371 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.429375 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.429379 | orchestrator |
2025-05-13 23:55:33.429384 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-05-13 23:55:33.429389 | orchestrator | Tuesday 13 May 2025 23:52:35 +0000 (0:00:02.992) 0:02:13.713 ***********
2025-05-13 23:55:33.429393 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.429398 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.429402 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.429406 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.429411 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.429415 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.429420 | orchestrator |
2025-05-13 23:55:33.429424 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-05-13 23:55:33.429429 | orchestrator | Tuesday 13 May 2025 23:52:38 +0000 (0:00:03.418) 0:02:17.131 ***********
2025-05-13 23:55:33.429433 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.429438 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.429442 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.429447 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.429451 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.429455 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.429460 | orchestrator |
2025-05-13 23:55:33.429464 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-05-13 23:55:33.429469 | orchestrator | Tuesday 13 May 2025 23:52:41 +0000 (0:00:02.589) 0:02:19.721 ***********
2025-05-13 23:55:33.429473 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-13 23:55:33.429478 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:33.429482 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-13 23:55:33.429487 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:33.429492 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-13 23:55:33.429496 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:33.429501 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-13 23:55:33.429508 | orchestrator | skipping: [testbed-node-4]
2025-05-13 23:55:33.429513 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-13 23:55:33.429517 | orchestrator | skipping: [testbed-node-3]
2025-05-13 23:55:33.429522 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-13 23:55:33.429530 | orchestrator | skipping: [testbed-node-5]
2025-05-13 23:55:33.429535 | orchestrator |
2025-05-13 23:55:33.429540 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-05-13 23:55:33.429544 | orchestrator | Tuesday 13 May 2025 23:52:44 +0000 (0:00:03.791) 0:02:23.512 ***********
2025-05-13 23:55:33.429553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent',
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.429558 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.429563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.429567 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.429573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.429578 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.429584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 23:55:33.429593 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.429601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.429606 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.429615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 23:55:33.429620 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.429625 | orchestrator | 2025-05-13 23:55:33.429630 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-13 23:55:33.429636 | orchestrator | Tuesday 13 May 2025 23:52:48 +0000 (0:00:03.442) 0:02:26.955 *********** 2025-05-13 23:55:33.429641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.429647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.429657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.429668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 23:55:33.429674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.429712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 23:55:33.429717 | orchestrator | 2025-05-13 23:55:33.429722 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-13 23:55:33.429726 | orchestrator | Tuesday 13 May 2025 23:52:52 +0000 (0:00:03.702) 0:02:30.657 *********** 2025-05-13 23:55:33.429731 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:33.429736 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:33.429740 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:33.429745 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:55:33.429749 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:55:33.429754 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:55:33.429758 | orchestrator | 2025-05-13 23:55:33.429766 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-13 23:55:33.429771 | orchestrator | Tuesday 13 May 2025 23:52:52 +0000 (0:00:00.546) 0:02:31.204 *********** 2025-05-13 23:55:33.429776 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:55:33.429780 | orchestrator | 2025-05-13 23:55:33.429784 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-13 23:55:33.429789 | orchestrator | Tuesday 13 May 2025 23:52:54 +0000 (0:00:02.047) 0:02:33.251 *********** 2025-05-13 23:55:33.429793 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:55:33.429798 | orchestrator | 2025-05-13 23:55:33.429802 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-13 23:55:33.429807 | orchestrator | Tuesday 13 May 2025 23:52:56 +0000 (0:00:02.127) 0:02:35.378 *********** 2025-05-13 23:55:33.429811 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:55:33.429815 | orchestrator | 2025-05-13 23:55:33.429820 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-13 23:55:33.429824 | orchestrator | Tuesday 13 May 2025 23:53:37 +0000 (0:00:41.152) 0:03:16.531 *********** 2025-05-13 23:55:33.429829 | orchestrator | 2025-05-13 23:55:33.429833 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-13 23:55:33.429838 | orchestrator | Tuesday 13 May 2025 23:53:38 +0000 (0:00:00.325) 0:03:16.856 *********** 2025-05-13 23:55:33.429842 | orchestrator | 2025-05-13 23:55:33.429847 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-13 23:55:33.429851 | orchestrator | Tuesday 13 May 2025 23:53:38 +0000 (0:00:00.066) 0:03:16.923 *********** 2025-05-13 23:55:33.429855 | orchestrator | 2025-05-13 23:55:33.429860 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-13 23:55:33.429864 | orchestrator | Tuesday 13 May 2025 23:53:38 +0000 (0:00:00.081) 0:03:17.004 *********** 2025-05-13 23:55:33.429868 | orchestrator | 2025-05-13 23:55:33.429873 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-13 23:55:33.429878 | orchestrator | Tuesday 13 May 2025 23:53:38 +0000 (0:00:00.168) 0:03:17.173 *********** 2025-05-13 23:55:33.429882 | orchestrator |
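
The long "Running Neutron bootstrap container" step above (about 41 seconds in this run) is typically where the Neutron database schema migrations happen before any service container is restarted. Each service definition in this run also carries a healthcheck block (interval, retries, start_period, test, timeout); the two test styles visible above are healthcheck_curl against the API bind address and healthcheck_port against a service port. A minimal Python stand-in for what those probes amount to; the real kolla healthcheck scripts are shell and additionally verify the owning process, which is omitted here:

#!/usr/bin/env python3
# Simplified stand-ins for the two healthcheck styles in the service
# definitions above. Assumptions: a plain TCP connect approximates
# healthcheck_port, and a plain HTTP GET approximates healthcheck_curl.
import socket
import urllib.request


def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Succeed if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
    """Succeed if the endpoint answers an HTTP request without an error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False


if __name__ == "__main__":
    # Values taken from the service definitions logged above.
    print(healthcheck_curl("http://192.168.16.11:9696"))  # neutron-server API
    print(healthcheck_port("192.168.16.11", 6640))        # port checked for the OVN metadata agent
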
2025-05-13 23:55:33.429886 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-13 23:55:33.429894 | orchestrator | Tuesday 13 May 2025 23:53:38 +0000 (0:00:00.212) 0:03:17.385 *********** 2025-05-13 23:55:33.429898 | orchestrator | 2025-05-13 23:55:33.429903 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-05-13 23:55:33.429907 | orchestrator | Tuesday 13 May 2025 23:53:38 +0000 (0:00:00.104) 0:03:17.490 *********** 2025-05-13 23:55:33.429912 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:55:33.429916 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:55:33.429921 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:55:33.429925 | orchestrator | 2025-05-13 23:55:33.429930 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-13 23:55:33.429934 | orchestrator | Tuesday 13 May 2025 23:54:11 +0000 (0:00:32.499) 0:03:49.990 *********** 2025-05-13 23:55:33.429939 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:55:33.429946 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:55:33.429950 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:55:33.429955 | orchestrator | 2025-05-13 23:55:33.429959 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:55:33.429964 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-13 23:55:33.429969 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-13 23:55:33.429974 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-13 23:55:33.429978 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-13 23:55:33.429986 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-13 23:55:33.429991 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-13 23:55:33.429996 | orchestrator | 2025-05-13 23:55:33.430000 | orchestrator | 2025-05-13 23:55:33.430005 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:55:33.430009 | orchestrator | Tuesday 13 May 2025 23:55:32 +0000 (0:01:20.757) 0:05:10.747 *********** 2025-05-13 23:55:33.430056 | orchestrator | =============================================================================== 2025-05-13 23:55:33.430060 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 80.76s 2025-05-13 23:55:33.430115 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.15s 2025-05-13 23:55:33.430122 | orchestrator | neutron : Restart neutron-server container ----------------------------- 32.50s 2025-05-13 23:55:33.430126 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.11s 2025-05-13 23:55:33.430131 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.64s 2025-05-13 23:55:33.430135 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.34s 2025-05-13 23:55:33.430139 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.34s 2025-05-13 23:55:33.430144 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini
------------------- 5.09s 2025-05-13 23:55:33.430148 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.06s 2025-05-13 23:55:33.430152 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.91s 2025-05-13 23:55:33.430157 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.82s 2025-05-13 23:55:33.430161 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.79s 2025-05-13 23:55:33.430166 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.70s 2025-05-13 23:55:33.430170 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.56s 2025-05-13 23:55:33.430174 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.44s 2025-05-13 23:55:33.430179 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.42s 2025-05-13 23:55:33.430183 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.31s 2025-05-13 23:55:33.430187 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.30s 2025-05-13 23:55:33.430192 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.30s 2025-05-13 23:55:33.430196 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.28s 2025-05-13 23:55:33.430332 | orchestrator | 2025-05-13 23:55:33 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:33.430423 | orchestrator | 2025-05-13 23:55:33 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:36.463659 | orchestrator | 2025-05-13 23:55:36 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:36.465658 | orchestrator | 2025-05-13 23:55:36 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:36.468491 | orchestrator | 2025-05-13 23:55:36 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:36.470698 | orchestrator | 2025-05-13 23:55:36 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:55:36.471023 | orchestrator | 2025-05-13 23:55:36 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:39.517963 | orchestrator | 2025-05-13 23:55:39 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:39.525581 | orchestrator | 2025-05-13 23:55:39 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:39.525692 | orchestrator | 2025-05-13 23:55:39 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:39.525703 | orchestrator | 2025-05-13 23:55:39 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:55:39.525712 | orchestrator | 2025-05-13 23:55:39 | INFO  | Wait 1 second(s) until the next check
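
The interleaved INFO lines come from the OSISM orchestrator, which starts the playbooks as background tasks and then polls their state once per second until each reaches a terminal state. A sketch of that wait loop in Python; get_task_state() is a hypothetical stand-in for the actual manager API call, not a real OSISM client function:

#!/usr/bin/env python3
# Sketch of the polling pattern visible in the log: report the state of
# every outstanding task, drop the finished ones, and sleep between
# rounds.
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def wait_for_tasks(task_ids, get_task_state, interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard below is safe
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
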
2025-05-13 23:55:42.573615 | orchestrator | 2025-05-13 23:55:42 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:42.573757 | orchestrator | 2025-05-13 23:55:42 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:42.573766 | orchestrator | 2025-05-13 23:55:42 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:42.575634 | orchestrator | 2025-05-13 23:55:42 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:55:42.575656 | orchestrator | 2025-05-13 23:55:42 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:45.608536 | orchestrator | 2025-05-13 23:55:45 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:45.609888 | orchestrator | 2025-05-13 23:55:45 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:45.613581 | orchestrator | 2025-05-13 23:55:45 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:45.614186 | orchestrator | 2025-05-13 23:55:45 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:55:45.614205 | orchestrator | 2025-05-13 23:55:45 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:48.656456 | orchestrator | 2025-05-13 23:55:48 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state STARTED 2025-05-13 23:55:48.657517 | orchestrator | 2025-05-13 23:55:48 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:55:48.658768 | orchestrator | 2025-05-13 23:55:48 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED 2025-05-13 23:55:48.660280 | orchestrator | 2025-05-13 23:55:48 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:55:48.660330 | orchestrator | 2025-05-13 23:55:48 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:55:51.702557 | orchestrator | 2025-05-13 23:55:51 | INFO  | Task f39010f6-7d7e-490b-86b3-c5bd2074fe64 is in state SUCCESS 2025-05-13 23:55:51.704075 | orchestrator | 2025-05-13 23:55:51.704118 | orchestrator | 2025-05-13 23:55:51.704126 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:55:51.704132 | orchestrator | 2025-05-13 23:55:51.704139 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:55:51.704145 | orchestrator | Tuesday 13 May 2025 23:53:00 +0000 (0:00:00.267) 0:00:00.267 *********** 2025-05-13 23:55:51.704151 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:55:51.704158 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:55:51.704164 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:55:51.704170 | orchestrator | 2025-05-13 23:55:51.704176 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:55:51.704182 | orchestrator | Tuesday 13 May 2025 23:53:00 +0000 (0:00:00.307) 0:00:00.575 *********** 2025-05-13 23:55:51.704188 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-05-13 23:55:51.704195 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-05-13 23:55:51.704200 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-05-13 23:55:51.704228 | orchestrator | 2025-05-13 23:55:51.704234 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-05-13 23:55:51.704239 | orchestrator | 2025-05-13 23:55:51.704245 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-13 23:55:51.704251 | orchestrator | Tuesday 13 May 2025 23:53:01 +0000 (0:00:00.594) 0:00:01.033 *********** 2025-05-13 23:55:51.704256 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:55:51.704263 | orchestrator |
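
The two grouping tasks above are the usual kolla-ansible pattern: each host is added to a dynamic group whose name is derived from a configuration flag (here enable_designate_True), and the service play then targets that group. The same bookkeeping expressed in plain Python, with the host list reduced to the three control nodes seen in this run:

#!/usr/bin/env python3
# Plain-Python rendering of Ansible's group_by step above: hosts are
# bucketed under a key built from a configuration flag, and later plays
# target the resulting group (e.g. enable_designate_True).
hostvars = {
    "testbed-node-0": {"enable_designate": True},
    "testbed-node-1": {"enable_designate": True},
    "testbed-node-2": {"enable_designate": True},
}

groups: dict[str, list[str]] = {}
for host, hv in sorted(hostvars.items()):
    key = f"enable_designate_{hv['enable_designate']}"
    groups.setdefault(key, []).append(host)

print(groups)
# {'enable_designate_True': ['testbed-node-0', 'testbed-node-1', 'testbed-node-2']}
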
2025-05-13 23:55:51.704269 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-13 23:55:51.704275 | orchestrator | Tuesday 13 May 2025 23:53:01 +0000 (0:00:00.594) 0:00:01.628 *********** 2025-05-13 23:55:51.704280 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-05-13 23:55:51.704286 | orchestrator | 2025-05-13 23:55:51.704292 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-13 23:55:51.704297 | orchestrator | Tuesday 13 May 2025 23:53:05 +0000 (0:00:03.346) 0:00:04.974 *********** 2025-05-13 23:55:51.704315 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-13 23:55:51.704322 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-13 23:55:51.704328 | orchestrator | 2025-05-13 23:55:51.704334 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-13 23:55:51.704340 | orchestrator | Tuesday 13 May 2025 23:53:11 +0000 (0:00:06.096) 0:00:11.070 *********** 2025-05-13 23:55:51.704346 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-13 23:55:51.704351 | orchestrator | 2025-05-13 23:55:51.704357 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-13 23:55:51.704363 | orchestrator | Tuesday 13 May 2025 23:53:14 +0000 (0:00:03.205) 0:00:14.276 *********** 2025-05-13 23:55:51.704368 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 23:55:51.704374 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-13 23:55:51.704379 | orchestrator | 2025-05-13 23:55:51.704385 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-05-13 23:55:51.704391 | orchestrator | Tuesday 13 May 2025 23:53:18 +0000 (0:00:03.454) 0:00:17.730 *********** 2025-05-13 23:55:51.704396 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 23:55:51.704402 | orchestrator | 2025-05-13 23:55:51.704408 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-05-13 23:55:51.704413 | orchestrator | Tuesday 13 May 2025 23:53:21 +0000 (0:00:03.472) 0:00:21.203 *********** 2025-05-13 23:55:51.704419 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-13 23:55:51.704425 | orchestrator | 2025-05-13 23:55:51.704430 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-13 23:55:51.704436 | orchestrator | Tuesday 13 May 2025 23:53:25 +0000 (0:00:04.407) 0:00:25.611 *********** 2025-05-13 23:55:51.704444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'},
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:55:51.704466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:55:51.704478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:55:51.704489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}}) 2025-05-13 23:55:51.704591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704689 | orchestrator | 2025-05-13 23:55:51.704696 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-13 23:55:51.704702 | orchestrator | Tuesday 13 May 2025 23:53:28 +0000 (0:00:02.897) 0:00:28.509 *********** 2025-05-13 23:55:51.704709 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:51.704716 | orchestrator | 2025-05-13 23:55:51.704722 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-13 23:55:51.704728 | orchestrator | Tuesday 13 May 2025 23:53:28 +0000 (0:00:00.145) 0:00:28.654 *********** 2025-05-13 23:55:51.704735 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:51.704742 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:51.704748 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:51.704754 | orchestrator | 2025-05-13 23:55:51.704761 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-13 23:55:51.704768 | orchestrator | Tuesday 13 May 2025 23:53:29 +0000 (0:00:00.679) 0:00:29.334 *********** 2025-05-13 23:55:51.704774 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:55:51.704786 | orchestrator | 2025-05-13 23:55:51.704792 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-13 23:55:51.704798 | orchestrator | Tuesday 13 May 2025 23:53:30 +0000 (0:00:00.835) 0:00:30.169 *********** 2025-05-13 23:55:51.704805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
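
Each loop item above is one kolla service definition: container name, image, bind mounts, optional resource dimensions, a healthcheck, and, for API services, the HAProxy frontends, with the internal frontend bound on the VIP and the external one answering for api.testbed.osism.xyz on the same port. kolla-ansible deploys these through its own container module, but reading one definition as an approximate docker CLI invocation makes the fields concrete; this translation is a reading aid, not the actual deployment mechanism:

#!/usr/bin/env python3
# Approximate translation of one service definition from the log into a
# docker run command line.
import shlex

service = {
    "container_name": "designate_api",
    "image": "registry.osism.tech/kolla/designate-api:2024.2",
    "volumes": [
        "/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
        "",  # the logged definitions contain empty placeholder entries
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
        "timeout": "30",
    },
}

cmd = ["docker", "run", "--detach", "--name", service["container_name"]]
for volume in service["volumes"]:
    if volume:  # skip the empty placeholders
        cmd += ["--volume", volume]
hc = service["healthcheck"]
cmd += [
    "--health-cmd", hc["test"][1],
    "--health-interval", f"{hc['interval']}s",
    "--health-retries", hc["retries"],
    "--health-start-period", f"{hc['start_period']}s",
    "--health-timeout", f"{hc['timeout']}s",
    service["image"],
]
print(shlex.join(cmd))
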
2025-05-13 23:55:51.704817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:55:51.704827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:55:51.704835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True,
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
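
The "Copying over extra CA certificates" loop stages the deployment's CA bundle into every service's kolla config directory so the containers can validate the internal TLS endpoints. A sketch of that distribution step; the bundle path and target layout are illustrative assumptions, not values taken from this job:

#!/usr/bin/env python3
# Sketch of distributing an extra CA bundle into per-service config
# directories. CA_BUNDLE and the /etc/kolla/<service> layout are
# assumptions for illustration only.
import shutil
from pathlib import Path

CA_BUNDLE = Path("/etc/kolla/certificates/ca/root.crt")  # assumed location
SERVICES = [
    "designate-api", "designate-backend-bind9", "designate-central",
    "designate-mdns", "designate-producer", "designate-worker",
]

for name in SERVICES:
    dest = Path("/etc/kolla") / name / "ca-certificates"
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy2(CA_BUNDLE, dest / CA_BUNDLE.name)
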
2025-05-13 23:55:51.704945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.704959 | orchestrator | 2025-05-13 23:55:51.704965 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-13 23:55:51.704971 | orchestrator | Tuesday 13 May 2025 23:53:36 +0000 (0:00:06.251) 0:00:36.421 *********** 2025-05-13 23:55:51.704977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 23:55:51.704983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:55:51.704993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.704999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.705008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.705015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.705025 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:51.705031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 23:55:51.705037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:55:51.705189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.705198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.705207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.705214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.705225 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:51.705231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 23:55:51.705237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 23:55:51.705249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.705255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.705261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.705271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.705281 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:51.705287 | orchestrator |
2025-05-13 23:55:51.705292 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-05-13 23:55:51.705298 | orchestrator | Tuesday 13 May 2025 23:53:37 +0000 (0:00:01.121) 0:00:37.543 ***********
2025-05-13 23:55:51.705304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 23:55:51.705310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.705319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.705325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.705332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.705344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.705351 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:51.705357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.705363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.705372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.705378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.705384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.705397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.705404 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:51.705410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.705416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.705422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.705432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.705438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.705451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.705457 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:51.705463 | orchestrator |
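Both service-cert-copy tasks in this run are skipped on every node: the loop iterates over the per-service container definitions (shown in full for designate-api above, abbreviated with … where repeated) and copies a backend TLS certificate and key only when backend TLS is active, which is evidently not the case in this testbed. For orientation, a minimal globals.yml sketch of the switches involved (illustrative values, not taken from this job's configuration):

    # globals.yml (sketch) -- backend TLS is what would make the copy tasks run
    kolla_enable_tls_internal: "yes"   # TLS on the internal VIP
    kolla_enable_tls_backend: "yes"    # TLS between HAProxy and each service backend

With kolla_enable_tls_backend left at its default of "no", the key/certificate loops have nothing to copy and every item reports "skipping".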
2025-05-13 23:55:51.705468 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-05-13 23:55:51.705474 | orchestrator | Tuesday 13 May 2025 23:53:39 +0000 (0:00:02.079) 0:00:39.623 ***********
2025-05-13 23:55:51.705480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.705487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.705497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.705503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.705517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.705528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.705537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.705547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.705561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.705571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.705590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.705603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.705614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.705623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.705635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.705645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.705669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.705683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.705689 | orchestrator |
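The config.json files copied here drive kolla's container start-up: each service container mounts /etc/kolla/<service>/ read-only at /var/lib/kolla/config_files/ (visible in the volumes lists above), and the kolla_start entrypoint reads config.json to learn which command to exec and which files to move into place with which ownership. A rough sketch of what such a file contains for designate-api (command, paths and permissions are illustrative, following the usual kolla layout rather than taken from this log):

    {
      "command": "designate-api --config-file /etc/designate/designate.conf",
      "config_files": [
        {
          "source": "/var/lib/kolla/config_files/designate.conf",
          "dest": "/etc/designate/designate.conf",
          "owner": "designate",
          "perm": "0600"
        }
      ]
    }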
2025-05-13 23:55:51.705695 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-05-13 23:55:51.705701 | orchestrator | Tuesday 13 May 2025 23:53:46 +0000 (0:00:06.922) 0:00:46.545 ***********
2025-05-13 23:55:51.705710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.705716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.705723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.705959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.705975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.705985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.705991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.705997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.706003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.706053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.706067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.706074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.706083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.706090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.706096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.706102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.706113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.706151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.706163 | orchestrator |
2025-05-13 23:55:51.706174 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-05-13 23:55:51.706185 | orchestrator | Tuesday 13 May 2025 23:54:02 +0000 (0:00:15.648) 0:01:02.193 ***********
2025-05-13 23:55:51.706196 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-13 23:55:51.706208 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-13 23:55:51.706218 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-13 23:55:51.706229 | orchestrator |
2025-05-13 23:55:51.706239 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-05-13 23:55:51.706249 | orchestrator | Tuesday 13 May 2025 23:54:07 +0000 (0:00:05.158) 0:01:07.352 ***********
2025-05-13 23:55:51.706259 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-13 23:55:51.706275 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-13 23:55:51.706287 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-13 23:55:51.706299 | orchestrator |
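pools.yaml is the piece that ties the designate services to the bind9 backend deployed alongside them: designate-central reads it to learn which nameservers serve the pool and which targets (here of type bind9) it should write zones to, with designate-mdns acting as the zone-transfer master. A minimal sketch with placeholder addresses (the real file is rendered from the pools.yaml.j2 template above, so none of these values are from this job):

    - name: default
      ns_records:
        - hostname: ns1.testbed.example.
          priority: 1
      nameservers:
        - host: 192.0.2.10      # bind9 instance that answers queries
          port: 53
      targets:
        - type: bind9
          masters:
            - host: 192.0.2.10  # designate-mdns, source of zone transfers
              port: 5354
          options:
            host: 192.0.2.10
            port: 53
            rndc_host: 192.0.2.10
            rndc_port: 953
            rndc_key_file: /etc/designate/rndc.key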
2025-05-13 23:55:51.706310 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-05-13 23:55:51.706322 | orchestrator | Tuesday 13 May 2025 23:54:10 +0000 (0:00:03.273) 0:01:10.626 ***********
2025-05-13 23:55:51.706333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.706346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.706374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.706387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.706405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.706416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.706428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.706440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.706459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.706476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.706487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.706503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.706515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.706526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.706546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.706557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.706573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.706586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.706598 | orchestrator |
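The rndc.conf rendering directly above follows a selective pattern: only the designate-backend-bind9 and designate-worker items report "changed", since those are the two components that speak the rndc protocol to named (the worker to create and delete zones, bind9 to be managed). The shared secret itself normally comes out of kolla-ansible's passwords.yml; an illustrative entry (the key name follows kolla convention and the value is a made-up placeholder; real deployments generate it with kolla-genpwd):

    # passwords.yml (sketch)
    designate_rndc_key: "bm90LWEtcmVhbC1rZXk="

The same key material is then templated into rndc.conf and rndc.key so that the worker, named and the rndc utility all agree on it.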
2025-05-13 23:55:51.706609 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-05-13 23:55:51.706622 | orchestrator | Tuesday 13 May 2025 23:54:15 +0000 (0:00:04.638) 0:01:15.264 ***********
2025-05-13 23:55:51.706644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.706688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.706709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', …})
2025-05-13 23:55:51.706728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.706740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.706758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.706770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.706782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.706801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.706813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.706831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.706845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', …})
2025-05-13 23:55:51.706860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', …})
2025-05-13 23:55:51.706872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', …})
2025-05-13 23:55:51.706890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', …})
2025-05-13 23:55:51.706900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.706916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.706929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', …})
2025-05-13 23:55:51.706940 | orchestrator |
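Every container definition in these loops carries the same healthcheck shape, and kolla maps it onto Docker's native health check: the healthcheck_* helpers are small scripts shipped in the images (healthcheck_curl probes an HTTP endpoint, healthcheck_listen checks for a listening socket such as named on port 53, and healthcheck_port checks that the named process holds a connection to a port, here 5672, i.e. RabbitMQ). Expressed in compose-style YAML, the designate-api dict would come out roughly as follows (illustrative mapping only; the bare numbers above are assumed to be seconds):

    healthcheck:
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s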
[testbed-node-1] 2025-05-13 23:55:51.706993 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:51.707003 | orchestrator | 2025-05-13 23:55:51.707012 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-13 23:55:51.707023 | orchestrator | Tuesday 13 May 2025 23:54:19 +0000 (0:00:00.746) 0:01:18.845 *********** 2025-05-13 23:55:51.707038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 23:55:51.707057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:55:51.707069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 23:55:51.707111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:55:51.707153 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:55:51.707164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 23:55:51.707190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 23:55:51.707232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707243 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:55:51.707254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 23:55:51.707303 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:55:51.707314 | orchestrator | 2025-05-13 23:55:51.707325 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-13 23:55:51.707336 | orchestrator | Tuesday 13 May 2025 23:54:20 +0000 (0:00:01.540) 0:01:20.385 *********** 2025-05-13 23:55:51.707352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:55:51.707371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:55:51.707382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 23:55:51.707393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 23:55:51.707584 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 23:55:51.707595 | orchestrator |
2025-05-13 23:55:51.707605 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-13 23:55:51.707616 | orchestrator | Tuesday 13 May 2025 23:54:26 +0000 (0:00:05.305) 0:01:25.691 ***********
2025-05-13 23:55:51.707626 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:55:51.707636 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:55:51.707646 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:55:51.707715 | orchestrator |
2025-05-13 23:55:51.707727 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-05-13 23:55:51.707745 | orchestrator | Tuesday 13 May 2025 23:54:26 +0000 (0:00:00.282) 0:01:25.973 ***********
2025-05-13 23:55:51.707756 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-05-13 23:55:51.707766 | orchestrator |
2025-05-13 23:55:51.707775 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-05-13 23:55:51.707785 | orchestrator | Tuesday 13 May 2025 23:54:29 +0000 (0:00:02.943) 0:01:28.916 ***********
2025-05-13 23:55:51.707795 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-13 23:55:51.707805 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-05-13 23:55:51.707816 | orchestrator |
2025-05-13 23:55:51.707826 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-05-13 23:55:51.707836 | orchestrator | Tuesday 13 May 2025 23:54:32 +0000 (0:00:02.811) 0:01:31.728 ***********
2025-05-13 23:55:51.707846 | orchestrator | changed: [testbed-node-0]
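Editor's note: the bootstrap task above runs a one-shot container that initializes the Designate database schema before the long-running service containers are restarted. A minimal sketch of that pattern with the Docker SDK for Python follows; the command and the KOLLA_* environment switches are assumptions about how kolla images are typically bootstrapped, not values taken from this log:

    import docker

    client = docker.from_env()
    # One-shot bootstrap: run the schema migration, then remove the container.
    logs = client.containers.run(
        "registry.osism.tech/kolla/designate-central:2024.2",  # image name as seen in this log
        command=["designate-manage", "database", "sync"],      # assumed bootstrap command
        environment={
            "KOLLA_BOOTSTRAP": "",                   # assumption: kolla's bootstrap switch
            "KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS",  # assumption
        },
        volumes={
            "/etc/kolla/designate-central": {
                "bind": "/var/lib/kolla/config_files",
                "mode": "ro",
            },
        },
        remove=True,
    )
    print(logs.decode("utf-8"))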
2025-05-13 23:55:51.707856 | orchestrator |
2025-05-13 23:55:51.707866 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-13 23:55:51.707876 | orchestrator | Tuesday 13 May 2025 23:54:46 +0000 (0:00:14.641) 0:01:46.369 ***********
2025-05-13 23:55:51.707886 | orchestrator |
2025-05-13 23:55:51.707896 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-13 23:55:51.707907 | orchestrator | Tuesday 13 May 2025 23:54:46 +0000 (0:00:00.069) 0:01:46.439 ***********
2025-05-13 23:55:51.707917 | orchestrator |
2025-05-13 23:55:51.707933 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-13 23:55:51.707944 | orchestrator | Tuesday 13 May 2025 23:54:46 +0000 (0:00:00.070) 0:01:46.510 ***********
2025-05-13 23:55:51.707953 | orchestrator |
2025-05-13 23:55:51.707964 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-05-13 23:55:51.707974 | orchestrator | Tuesday 13 May 2025 23:54:46 +0000 (0:00:00.068) 0:01:46.578 ***********
2025-05-13 23:55:51.707988 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:55:51.707999 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:55:51.708010 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:55:51.708021 | orchestrator |
2025-05-13 23:55:51.708031 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-05-13 23:55:51.708042 | orchestrator | Tuesday 13 May 2025 23:54:59 +0000 (0:00:12.733) 0:01:59.312 ***********
2025-05-13 23:55:51.708053 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:55:51.708063 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:55:51.708073 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:55:51.708084 | orchestrator |
2025-05-13 23:55:51.708095 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-05-13 23:55:51.708105 | orchestrator | Tuesday 13 May 2025 23:55:05 +0000 (0:00:05.706) 0:02:05.018 ***********
2025-05-13 23:55:51.708116 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:55:51.708126 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:55:51.708136 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:55:51.708147 | orchestrator |
2025-05-13 23:55:51.708157 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-05-13 23:55:51.708167 | orchestrator | Tuesday 13 May 2025 23:55:12 +0000 (0:00:07.605) 0:02:12.623 ***********
2025-05-13 23:55:51.708178 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:55:51.708187 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:55:51.708198 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:55:51.708208 | orchestrator |
2025-05-13 23:55:51.708219 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-05-13 23:55:51.708229 | orchestrator | Tuesday 13 May 2025 23:55:19 +0000 (0:00:06.395) 0:02:19.019 ***********
2025-05-13 23:55:51.708240 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:55:51.708250 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:55:51.708260 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:55:51.708270 | orchestrator |
2025-05-13 23:55:51.708280 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-05-13 23:55:51.708299 | orchestrator | Tuesday 13 May 2025 23:55:31 +0000 (0:00:12.263) 0:02:31.283 ***********
2025-05-13 23:55:51.708309 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:55:51.708320 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:55:51.708330 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:55:51.708340 | orchestrator |
2025-05-13 23:55:51.708351 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-05-13 23:55:51.708361 | orchestrator | Tuesday 13 May 2025 23:55:43 +0000 (0:00:11.829) 0:02:43.112 ***********
2025-05-13 23:55:51.708371 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:55:51.708382 | orchestrator |
2025-05-13 23:55:51.708392 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:55:51.708404 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-13 23:55:51.708416 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 23:55:51.708426 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 23:55:51.708436 | orchestrator |
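Editor's note: every item echoed in the designate loops above is one entry from the role's service map: a container name, image, bind mounts, and a healthcheck. Reconstructed as a plain Python dict from the output above (the structure is exactly what the log prints; the comments on the healthcheck helpers are interpretation, not from the log):

    # One service entry from the "Check designate containers" loop, as printed above.
    designate_worker = {
        "container_name": "designate_worker",
        "group": "designate-worker",
        "enabled": True,
        "image": "registry.osism.tech/kolla/designate-worker:2024.2",
        "volumes": [
            "/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            # Three helper styles appear in this log and seem to probe different things:
            # healthcheck_port (the process holds a connection on the given port),
            # healthcheck_listen (something listens on the port, used for bind9/named),
            # healthcheck_curl (an HTTP probe, used for the API containers).
            "test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"],
            "timeout": "30",
        },
    }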
2025-05-13 23:55:51.708447 | orchestrator |
2025-05-13 23:55:51.708467 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:55:51.708478 | orchestrator | Tuesday 13 May 2025 23:55:50 +0000 (0:00:07.252) 0:02:50.364 ***********
2025-05-13 23:55:51.708488 | orchestrator | ===============================================================================
2025-05-13 23:55:51.708497 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.65s
2025-05-13 23:55:51.708506 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.64s
2025-05-13 23:55:51.708516 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.73s
2025-05-13 23:55:51.708526 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.26s
2025-05-13 23:55:51.708535 | orchestrator | designate : Restart designate-worker container ------------------------- 11.83s
2025-05-13 23:55:51.708545 | orchestrator | designate : Restart designate-central container ------------------------- 7.61s
2025-05-13 23:55:51.708556 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.25s
2025-05-13 23:55:51.708564 | orchestrator | designate : Copying over config.json files for services ----------------- 6.92s
2025-05-13 23:55:51.708575 | orchestrator | designate : Restart designate-producer container ------------------------ 6.40s
2025-05-13 23:55:51.708585 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.25s
2025-05-13 23:55:51.708595 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.10s
2025-05-13 23:55:51.708606 | orchestrator | designate : Restart designate-api container ----------------------------- 5.71s
2025-05-13 23:55:51.708616 | orchestrator | designate : Check designate containers ---------------------------------- 5.31s
2025-05-13 23:55:51.708626 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.16s
2025-05-13 23:55:51.708637 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.64s
2025-05-13 23:55:51.708646 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.41s
2025-05-13 23:55:51.708688 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.47s
2025-05-13 23:55:51.708699 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.45s
2025-05-13 23:55:51.708708 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.35s
2025-05-13 23:55:51.708718 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.27s
2025-05-13 23:55:51.708954 | orchestrator | 2025-05-13 23:55:51 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:55:51.708984 | orchestrator | 2025-05-13 23:55:51 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED
2025-05-13 23:55:51.716395 | orchestrator | 2025-05-13 23:55:51 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED
2025-05-13 23:55:51.716901 | orchestrator | 2025-05-13 23:55:51 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:55:54.772560 | orchestrator | 2025-05-13 23:55:54 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
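Editor's note: the interleaved INFO lines are not Ansible output; they come from the deployment wrapper on the orchestrator, which polls the state of the queued OSISM tasks until each reports SUCCESS, as the entries before and after this note show. A minimal sketch of that loop; get_task_state() is a hypothetical stand-in for whatever backend query the wrapper actually performs:

    import time

    def get_task_state(task_id: str) -> str:
        """Hypothetical stand-in for querying the task backend (e.g. a Celery AsyncResult)."""
        return "SUCCESS"

    def wait_for_tasks(task_ids, interval=1):
        pending = set(task_ids)
        while pending:
            # sorted() copies the set, so discarding inside the loop is safe
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

    # Task IDs taken from the log entries around this note.
    wait_for_tasks([
        "caf353ef-a173-473a-8fe0-54be960b8023",
        "56cb7757-54de-4efc-b413-7ace963468ec",
        "3e529a46-4583-4a82-97c2-13356d78342d",
    ])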
2025-05-13 23:55:54.775536 | orchestrator | 2025-05-13 23:55:54 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED
2025-05-13 23:55:54.777262 | orchestrator | 2025-05-13 23:55:54 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED
2025-05-13 23:55:54.779300 | orchestrator | 2025-05-13 23:55:54 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED
2025-05-13 23:55:54.779486 | orchestrator | 2025-05-13 23:55:54 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:55:57.832727 | orchestrator | 2025-05-13 23:55:57 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:55:57.835158 | orchestrator | 2025-05-13 23:55:57 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED
2025-05-13 23:55:57.836322 | orchestrator | 2025-05-13 23:55:57 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED
2025-05-13 23:55:57.839609 | orchestrator | 2025-05-13 23:55:57 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED
2025-05-13 23:55:57.839892 | orchestrator | 2025-05-13 23:55:57 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:56:00.895479 | orchestrator | 2025-05-13 23:56:00 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:56:00.898385 | orchestrator | 2025-05-13 23:56:00 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED
2025-05-13 23:56:00.900844 | orchestrator | 2025-05-13 23:56:00 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED
2025-05-13 23:56:00.903067 | orchestrator | 2025-05-13 23:56:00 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED
2025-05-13 23:56:00.903766 | orchestrator | 2025-05-13 23:56:00 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:56:03.959852 | orchestrator | 2025-05-13 23:56:03 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:56:03.961648 | orchestrator | 2025-05-13 23:56:03 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED
2025-05-13 23:56:03.964351 | orchestrator | 2025-05-13 23:56:03 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED
2025-05-13 23:56:03.966248 | orchestrator | 2025-05-13 23:56:03 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED
2025-05-13 23:56:03.966277 | orchestrator | 2025-05-13 23:56:03 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:56:07.007377 | orchestrator | 2025-05-13 23:56:07 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:56:07.009709 | orchestrator | 2025-05-13 23:56:07 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED
2025-05-13 23:56:07.011143 | orchestrator | 2025-05-13 23:56:07 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED
2025-05-13 23:56:07.013101 | orchestrator | 2025-05-13 23:56:07 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED
2025-05-13 23:56:07.013133 | orchestrator | 2025-05-13 23:56:07 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:56:10.055580 | orchestrator | 2025-05-13 23:56:10 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:56:10.057019 | orchestrator | 2025-05-13 23:56:10 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED
2025-05-13 23:56:10.059043 | orchestrator | 2025-05-13 23:56:10 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED
2025-05-13 23:56:10.060115 | orchestrator | 2025-05-13 23:56:10 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED
2025-05-13 23:56:10.060303 | orchestrator | 2025-05-13 23:56:10 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:56:13.102378 | orchestrator | 2025-05-13 23:56:13 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:56:13.104761 | orchestrator | 2025-05-13 23:56:13 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED
2025-05-13 23:56:13.106688 | orchestrator | 2025-05-13 23:56:13 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state STARTED
2025-05-13 23:56:13.108180 | orchestrator | 2025-05-13 23:56:13 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED
2025-05-13 23:56:13.108228 | orchestrator | 2025-05-13 23:56:13 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:56:16.146796 | orchestrator | 2025-05-13 23:56:16 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:56:16.146917 | orchestrator | 2025-05-13 23:56:16 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED
2025-05-13 23:56:16.147669 | orchestrator | 2025-05-13 23:56:16 | INFO  | Task 56cb7757-54de-4efc-b413-7ace963468ec is in state SUCCESS
2025-05-13 23:56:16.148680 | orchestrator |
2025-05-13 23:56:16.148711 | orchestrator |
2025-05-13 23:56:16.148724 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:56:16.148736 | orchestrator |
2025-05-13 23:56:16.148770 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:56:16.148782 | orchestrator | Tuesday 13 May 2025 23:55:09 +0000 (0:00:00.296) 0:00:00.296 ***********
2025-05-13 23:56:16.148793 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:56:16.148806 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:56:16.148817 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:56:16.148828 | orchestrator |
2025-05-13 23:56:16.148838 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 23:56:16.148849 | orchestrator | Tuesday 13 May 2025 23:55:10 +0000 (0:00:00.338) 0:00:00.635 ***********
2025-05-13 23:56:16.148861 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-05-13 23:56:16.148872 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-05-13 23:56:16.148883 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-05-13 23:56:16.148894 | orchestrator |
2025-05-13 23:56:16.148904 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-05-13 23:56:16.148915 | orchestrator |
2025-05-13 23:56:16.148925 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-13 23:56:16.148936 | orchestrator | Tuesday 13 May 2025 23:55:10 +0000 (0:00:00.533) 0:00:01.168 ***********
2025-05-13 23:56:16.148946 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:56:16.148957 | orchestrator |
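Editor's note: the service-ks-register tasks that follow register the placement service and its internal/public endpoints in Keystone. Roughly the equivalent with openstacksdk, using the URLs printed below; this is an illustration of the API calls involved, not the role's actual implementation, and the cloud name and region are assumptions:

    import openstack

    conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

    # Create the service, then one endpoint per interface, mirroring the
    # "Creating services" and "Creating endpoints" tasks below.
    service = conn.identity.create_service(name="placement", type="placement")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:8780"),
        ("public", "https://api.testbed.osism.xyz:8780"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",  # assumed region
        )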
2025-05-13 23:56:16.148968 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-05-13 23:56:16.148994 | orchestrator | Tuesday 13 May 2025 23:55:11 +0000 (0:00:00.575) 0:00:01.744 ***********
2025-05-13 23:56:16.149005 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-05-13 23:56:16.149015 | orchestrator |
2025-05-13 23:56:16.149076 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-05-13 23:56:16.149088 | orchestrator | Tuesday 13 May 2025 23:55:14 +0000 (0:00:03.474) 0:00:05.218 ***********
2025-05-13 23:56:16.149127 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-05-13 23:56:16.149140 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-05-13 23:56:16.149151 | orchestrator |
2025-05-13 23:56:16.149162 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-05-13 23:56:16.149173 | orchestrator | Tuesday 13 May 2025 23:55:21 +0000 (0:00:06.452) 0:00:11.670 ***********
2025-05-13 23:56:16.149184 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-13 23:56:16.149196 | orchestrator |
2025-05-13 23:56:16.149207 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-05-13 23:56:16.149219 | orchestrator | Tuesday 13 May 2025 23:55:24 +0000 (0:00:03.087) 0:00:14.758 ***********
2025-05-13 23:56:16.149231 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-13 23:56:16.149260 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-05-13 23:56:16.149273 | orchestrator |
2025-05-13 23:56:16.149286 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-05-13 23:56:16.149298 | orchestrator | Tuesday 13 May 2025 23:55:28 +0000 (0:00:03.904) 0:00:18.662 ***********
2025-05-13 23:56:16.149310 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-13 23:56:16.149323 | orchestrator |
2025-05-13 23:56:16.149336 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-05-13 23:56:16.149348 | orchestrator | Tuesday 13 May 2025 23:55:31 +0000 (0:00:03.435) 0:00:22.098 ***********
2025-05-13 23:56:16.149361 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-05-13 23:56:16.149374 | orchestrator |
2025-05-13 23:56:16.149386 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-13 23:56:16.149398 | orchestrator | Tuesday 13 May 2025 23:55:35 +0000 (0:00:03.987) 0:00:26.085 ***********
2025-05-13 23:56:16.149410 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:56:16.149434 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:56:16.149447 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:56:16.149460 | orchestrator |
2025-05-13 23:56:16.149473 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-05-13 23:56:16.149500 | orchestrator | Tuesday 13 May 2025 23:55:35 +0000 (0:00:00.322) 0:00:26.408 ***********
2025-05-13 23:56:16.149516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.149548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.149572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.149584 | orchestrator | 2025-05-13 23:56:16.149595 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-13 23:56:16.149606 | orchestrator | Tuesday 13 May 2025 23:55:36 +0000 (0:00:00.846) 0:00:27.255 *********** 2025-05-13 23:56:16.149647 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:56:16.149666 | orchestrator | 2025-05-13 23:56:16.149684 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-13 23:56:16.149703 | orchestrator | Tuesday 13 May 2025 23:55:36 +0000 (0:00:00.136) 0:00:27.392 *********** 2025-05-13 23:56:16.149720 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:56:16.149735 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:56:16.149746 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:56:16.149757 | orchestrator | 2025-05-13 23:56:16.149787 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-13 23:56:16.149798 | orchestrator | Tuesday 13 May 2025 23:55:37 +0000 (0:00:00.526) 0:00:27.918 *********** 2025-05-13 23:56:16.149809 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 
23:56:16.149820 | orchestrator | 2025-05-13 23:56:16.149830 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-13 23:56:16.149841 | orchestrator | Tuesday 13 May 2025 23:55:37 +0000 (0:00:00.539) 0:00:28.457 *********** 2025-05-13 23:56:16.149858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.149881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.149903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.149915 | orchestrator | 2025-05-13 23:56:16.149926 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-13 23:56:16.149937 | orchestrator | Tuesday 13 May 2025 23:55:39 +0000 (0:00:01.408) 0:00:29.866 *********** 2025-05-13 23:56:16.149948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:56:16.149959 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:56:16.149976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:56:16.149988 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:56:16.150006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:56:16.150162 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:56:16.150176 | orchestrator | 2025-05-13 23:56:16.150187 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-13 23:56:16.150219 | orchestrator | Tuesday 13 May 2025 23:55:40 +0000 (0:00:00.705) 0:00:30.571 *********** 2025-05-13 23:56:16.150231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:56:16.150242 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:56:16.150254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:56:16.150265 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:56:16.150276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:56:16.150294 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:56:16.150305 | orchestrator | 2025-05-13 23:56:16.150316 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-13 23:56:16.150327 | orchestrator | Tuesday 13 May 2025 23:55:40 +0000 (0:00:00.718) 0:00:31.290 *********** 2025-05-13 23:56:16.150346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.150377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.150400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.150422 | orchestrator | 2025-05-13 23:56:16.150436 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-13 23:56:16.150447 | orchestrator | Tuesday 13 May 2025 23:55:42 +0000 (0:00:01.362) 0:00:32.653 *********** 2025-05-13 23:56:16.150458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.150475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.150504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.150516 | orchestrator | 2025-05-13 23:56:16.150548 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-13 23:56:16.150559 | orchestrator | Tuesday 13 May 2025 23:55:45 +0000 (0:00:03.134) 0:00:35.787 *********** 2025-05-13 23:56:16.150570 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-13 23:56:16.150581 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-13 23:56:16.150592 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-13 23:56:16.150603 | orchestrator | 2025-05-13 23:56:16.150642 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-13 23:56:16.150653 | orchestrator | Tuesday 13 May 2025 23:55:46 +0000 (0:00:01.649) 0:00:37.436 *********** 2025-05-13 23:56:16.150664 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:56:16.150675 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:56:16.150686 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:56:16.150697 | orchestrator | 2025-05-13 23:56:16.150707 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-13 23:56:16.150718 | orchestrator | Tuesday 13 May 2025 23:55:48 +0000 (0:00:01.644) 0:00:39.080 *********** 2025-05-13 23:56:16.150729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:56:16.150740 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:56:16.150757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:56:16.150776 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:56:16.150816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 23:56:16.150828 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:56:16.150840 | orchestrator | 2025-05-13 23:56:16.150851 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-13 23:56:16.150862 | orchestrator | Tuesday 13 May 2025 23:55:49 +0000 (0:00:00.520) 0:00:39.601 *********** 2025-05-13 23:56:16.150874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.150885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.150918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 23:56:16.150945 | orchestrator | 2025-05-13 23:56:16.150965 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-13 23:56:16.150983 | orchestrator | Tuesday 13 May 2025 23:55:50 +0000 (0:00:01.330) 0:00:40.931 *********** 2025-05-13 23:56:16.151000 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:56:16.151029 | orchestrator | 2025-05-13 23:56:16.151052 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-13 23:56:16.151072 | orchestrator | Tuesday 13 May 2025 23:55:52 +0000 (0:00:02.088) 0:00:43.020 *********** 2025-05-13 23:56:16.151093 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:56:16.151114 | orchestrator | 2025-05-13 23:56:16.151136 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-13 23:56:16.151148 | orchestrator | Tuesday 13 May 2025 23:55:54 +0000 (0:00:02.081) 0:00:45.101 *********** 2025-05-13 23:56:16.151159 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:56:16.151170 | orchestrator | 2025-05-13 23:56:16.151180 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-13 23:56:16.151191 | orchestrator | Tuesday 13 May 2025 23:56:07 +0000 (0:00:13.316) 0:00:58.418 *********** 
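
Editor's note: the healthcheck block repeated in every placement-api item above ('interval': '30', 'retries': '3', 'start_period': '5', 'timeout': '30' plus a CMD-SHELL test such as healthcheck_curl http://192.168.16.10:8780) is handed to the container engine as a Docker-style healthcheck. As a rough illustration of the semantics only, here is a minimal Python sketch; the function probe_http and the reading of healthcheck_curl as "an HTTP GET that fails on connection errors or error statuses" are assumptions for illustration, not the actual kolla script.

import time
import urllib.error
import urllib.request

def probe_http(url: str, timeout: float = 30.0) -> bool:
    """Rough stand-in for kolla's healthcheck_curl: succeed when the
    endpoint answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def run_healthcheck(url, interval=30, retries=3, start_period=5, timeout=30):
    """Mimic the interval/retries/start_period/timeout knobs from the
    service dicts above: wait out the grace period, then tolerate up to
    `retries` consecutive failures before declaring the container unhealthy."""
    time.sleep(start_period)      # grace period after container start
    failures = 0
    while True:
        if probe_http(url, timeout=timeout):
            failures = 0          # any success resets the failure count
        else:
            failures += 1
            if failures >= retries:
                return "unhealthy"
        time.sleep(interval)

# e.g. run_healthcheck("http://192.168.16.10:8780") for placement-api on node 0
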
2025-05-13 23:56:16.151202 | orchestrator | 2025-05-13 23:56:16.151212 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-13 23:56:16.151223 | orchestrator | Tuesday 13 May 2025 23:56:08 +0000 (0:00:00.069) 0:00:58.487 *********** 2025-05-13 23:56:16.151234 | orchestrator | 2025-05-13 23:56:16.151276 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-13 23:56:16.151288 | orchestrator | Tuesday 13 May 2025 23:56:08 +0000 (0:00:00.087) 0:00:58.575 *********** 2025-05-13 23:56:16.151299 | orchestrator | 2025-05-13 23:56:16.151325 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-13 23:56:16.151337 | orchestrator | Tuesday 13 May 2025 23:56:08 +0000 (0:00:00.065) 0:00:58.640 *********** 2025-05-13 23:56:16.151347 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:56:16.151358 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:56:16.151368 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:56:16.151379 | orchestrator | 2025-05-13 23:56:16.151390 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:56:16.151402 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 23:56:16.151415 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:56:16.151438 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 23:56:16.151449 | orchestrator | 2025-05-13 23:56:16.151461 | orchestrator | 2025-05-13 23:56:16.151472 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:56:16.151483 | orchestrator | Tuesday 13 May 2025 23:56:15 +0000 (0:00:07.317) 0:01:05.958 *********** 2025-05-13 23:56:16.151493 | orchestrator | =============================================================================== 2025-05-13 23:56:16.151504 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.32s 2025-05-13 23:56:16.151518 | orchestrator | placement : Restart placement-api container ----------------------------- 7.32s 2025-05-13 23:56:16.151579 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.45s 2025-05-13 23:56:16.151599 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.99s 2025-05-13 23:56:16.151656 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.90s 2025-05-13 23:56:16.151675 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.47s 2025-05-13 23:56:16.151696 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.44s 2025-05-13 23:56:16.151716 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.13s 2025-05-13 23:56:16.151735 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.09s 2025-05-13 23:56:16.151755 | orchestrator | placement : Creating placement databases -------------------------------- 2.09s 2025-05-13 23:56:16.151776 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.08s 2025-05-13 23:56:16.151821 | orchestrator | placement : Copying over placement-api wsgi 
configuration --------------- 1.65s 2025-05-13 23:56:16.151833 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.64s 2025-05-13 23:56:16.151843 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.41s 2025-05-13 23:56:16.151854 | orchestrator | placement : Copying over config.json files for services ----------------- 1.36s 2025-05-13 23:56:16.151864 | orchestrator | placement : Check placement containers ---------------------------------- 1.33s 2025-05-13 23:56:16.151875 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.85s 2025-05-13 23:56:16.151886 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.72s 2025-05-13 23:56:16.151896 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.71s 2025-05-13 23:56:16.151908 | orchestrator | placement : include_tasks ----------------------------------------------- 0.58s 2025-05-13 23:56:16.151918 | orchestrator | 2025-05-13 23:56:16 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:56:16.151946 | orchestrator | 2025-05-13 23:56:16 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:56:19.186457 | orchestrator | 2025-05-13 23:56:19 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:56:19.190767 | orchestrator | 2025-05-13 23:56:19 | INFO  | Task 68ced4d5-20d6-4b31-b28d-ffc4747d44c8 is in state STARTED 2025-05-13 23:56:19.192504 | orchestrator | 2025-05-13 23:56:19 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED 2025-05-13 23:56:19.194566 | orchestrator | 2025-05-13 23:56:19 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:56:19.194601 | orchestrator | 2025-05-13 23:56:19 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:56:22.232135 | orchestrator | 2025-05-13 23:56:22 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:56:22.238466 | orchestrator | 2025-05-13 23:56:22 | INFO  | Task 68ced4d5-20d6-4b31-b28d-ffc4747d44c8 is in state STARTED 2025-05-13 23:56:22.238643 | orchestrator | 2025-05-13 23:56:22 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED 2025-05-13 23:56:22.238669 | orchestrator | 2025-05-13 23:56:22 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:56:22.238682 | orchestrator | 2025-05-13 23:56:22 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:56:25.275365 | orchestrator | 2025-05-13 23:56:25 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 23:56:25.277146 | orchestrator | 2025-05-13 23:56:25 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:56:25.278874 | orchestrator | 2025-05-13 23:56:25 | INFO  | Task 68ced4d5-20d6-4b31-b28d-ffc4747d44c8 is in state SUCCESS 2025-05-13 23:56:25.278935 | orchestrator | 2025-05-13 23:56:25 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED 2025-05-13 23:56:25.283079 | orchestrator | 2025-05-13 23:56:25 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:56:25.283145 | orchestrator | 2025-05-13 23:56:25 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:56:28.335900 | orchestrator | 2025-05-13 23:56:28 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 23:56:28.336012 | 
orchestrator | 2025-05-13 23:56:28 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:56:28.336969 | orchestrator | 2025-05-13 23:56:28 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED 2025-05-13 23:56:28.340229 | orchestrator | 2025-05-13 23:56:28 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:56:28.340262 | orchestrator | 2025-05-13 23:56:28 | INFO  | Wait 1 second(s) until the next check [... identical status polls repeat every ~3 seconds from 23:56:31 through 23:57:29; tasks f6e81fa1-d417-4874-bad6-e772623aa49e, caf353ef-a173-473a-8fe0-54be960b8023, 67556688-89f1-41e6-901c-2ee1d629e1de and 3e529a46-4583-4a82-97c2-13356d78342d all remain in state STARTED ...] 2025-05-13 23:57:32.402380 | orchestrator | 2025-05-13 23:57:32 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 23:57:32.402890 | orchestrator | 2025-05-13 23:57:32 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:57:32.404096 | orchestrator | 2025-05-13 23:57:32 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED 2025-05-13 23:57:32.404664 | orchestrator | 2025-05-13 23:57:32 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state STARTED 2025-05-13 23:57:32.404870 | orchestrator | 2025-05-13
23:57:32 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:57:35.457760 | orchestrator | 2025-05-13 23:57:35 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 23:57:35.463179 | orchestrator | 2025-05-13 23:57:35 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED 2025-05-13 23:57:35.464726 | orchestrator | 2025-05-13 23:57:35 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED 2025-05-13 23:57:35.468019 | orchestrator | 2025-05-13 23:57:35 | INFO  | Task 3e529a46-4583-4a82-97c2-13356d78342d is in state SUCCESS 2025-05-13 23:57:35.468082 | orchestrator | 2025-05-13 23:57:35 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:57:35.469776 | orchestrator | 2025-05-13 23:57:35.469820 | orchestrator | 2025-05-13 23:57:35.469833 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:57:35.469844 | orchestrator | 2025-05-13 23:57:35.469855 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:57:35.469866 | orchestrator | Tuesday 13 May 2025 23:56:20 +0000 (0:00:00.188) 0:00:00.188 *********** 2025-05-13 23:57:35.469877 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:57:35.469889 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:57:35.469900 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:57:35.469911 | orchestrator | 2025-05-13 23:57:35.469922 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:57:35.469933 | orchestrator | Tuesday 13 May 2025 23:56:20 +0000 (0:00:00.302) 0:00:00.491 *********** 2025-05-13 23:57:35.469944 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-13 23:57:35.469954 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-13 23:57:35.469965 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-13 23:57:35.469975 | orchestrator | 2025-05-13 23:57:35.469986 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-13 23:57:35.469997 | orchestrator | 2025-05-13 23:57:35.470007 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-13 23:57:35.470112 | orchestrator | Tuesday 13 May 2025 23:56:21 +0000 (0:00:00.683) 0:00:01.175 *********** 2025-05-13 23:57:35.470126 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:57:35.470137 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:57:35.470148 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:57:35.470159 | orchestrator | 2025-05-13 23:57:35.470170 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:57:35.470207 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:57:35.470234 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:57:35.470245 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:57:35.470256 | orchestrator | 2025-05-13 23:57:35.470266 | orchestrator | 2025-05-13 23:57:35.470277 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:57:35.470288 | orchestrator | Tuesday 13 May 2025 23:56:22 +0000 (0:00:00.727) 0:00:01.902 *********** 2025-05-13 23:57:35.470299 | orchestrator | 
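
Editor's note: the interleaved "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines come from the OSISM client, which submits each kolla-ansible play as an asynchronous task and polls its state until it leaves STARTED (above, 68ced4d5... reached SUCCESS at 23:56:25 and 3e529a46... at 23:57:35, while three tasks kept polling). A minimal sketch of that pattern, assuming a Celery-style state model; get_task_state is a hypothetical accessor, not the actual osism API:

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_task_state, delay=1.0):
    """Poll a set of asynchronous tasks until all reach a terminal state,
    logging each observation like the job output above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):   # sorted() copies, so discard is safe
            state = get_task_state(task_id)  # e.g. AsyncResult(task_id).state
            print(f"INFO | Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"INFO | Wait {int(delay)} second(s) until the next check")
            time.sleep(delay)
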
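Editor's note: the short "Wait for the Nova service" play above simply blocks until Nova's public API port accepts TCP connections before the dependent magnum deployment proceeds, presumably via Ansible's wait_for module. A rough Python equivalent under that assumption (the hostname and Nova's conventional API port 8774 are illustrative, not taken from this log):

import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 300.0) -> None:
    """Block until a TCP connection to host:port succeeds, as the
    'Waiting for Nova public port to be UP' task does."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=5):
                return
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} still down after {timeout}s")
            time.sleep(2)

# e.g. wait_for_port("api.testbed.osism.xyz", 8774)
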
=============================================================================== 2025-05-13 23:57:35.470309 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.73s 2025-05-13 23:57:35.470320 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2025-05-13 23:57:35.470330 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-05-13 23:57:35.470341 | orchestrator | 2025-05-13 23:57:35.470351 | orchestrator | 2025-05-13 23:57:35.470362 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 23:57:35.470373 | orchestrator | 2025-05-13 23:57:35.470385 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 23:57:35.470417 | orchestrator | Tuesday 13 May 2025 23:55:37 +0000 (0:00:00.276) 0:00:00.276 *********** 2025-05-13 23:57:35.470429 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:57:35.470441 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:57:35.470454 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:57:35.470466 | orchestrator | 2025-05-13 23:57:35.470479 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 23:57:35.470508 | orchestrator | Tuesday 13 May 2025 23:55:37 +0000 (0:00:00.289) 0:00:00.566 *********** 2025-05-13 23:57:35.470521 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-13 23:57:35.470533 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-13 23:57:35.470545 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-05-13 23:57:35.470557 | orchestrator | 2025-05-13 23:57:35.470569 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-05-13 23:57:35.470582 | orchestrator | 2025-05-13 23:57:35.470594 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-13 23:57:35.470605 | orchestrator | Tuesday 13 May 2025 23:55:38 +0000 (0:00:00.459) 0:00:01.026 *********** 2025-05-13 23:57:35.470616 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:57:35.470627 | orchestrator | 2025-05-13 23:57:35.470637 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-05-13 23:57:35.470648 | orchestrator | Tuesday 13 May 2025 23:55:38 +0000 (0:00:00.541) 0:00:01.567 *********** 2025-05-13 23:57:35.470659 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-05-13 23:57:35.470670 | orchestrator | 2025-05-13 23:57:35.470681 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-05-13 23:57:35.470691 | orchestrator | Tuesday 13 May 2025 23:55:42 +0000 (0:00:03.663) 0:00:05.231 *********** 2025-05-13 23:57:35.470702 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-05-13 23:57:35.470713 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-05-13 23:57:35.470724 | orchestrator | 2025-05-13 23:57:35.470734 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-05-13 23:57:35.470745 | orchestrator | Tuesday 13 May 2025 23:55:48 +0000 (0:00:06.491) 0:00:11.723 
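
Editor's note: the service-ks-register tasks above wrap the usual Keystone bootstrap for each service: create the service entity ("magnum (container-infra)"), then one endpoint per interface (internal https://api-int.testbed.osism.xyz:9511/v1 and public https://api.testbed.osism.xyz:9511/v1). A minimal sketch of the equivalent calls with openstacksdk, assuming a clouds.yaml entry named "testbed"; this is an illustration of the result, not the role's implementation:

import openstack

conn = openstack.connect(cloud="testbed")  # credentials from clouds.yaml

# Service entity for Magnum, matching "magnum (container-infra)" above.
service = conn.identity.create_service(
    name="magnum",
    type="container-infra",
    description="Container Infrastructure Management Service",
)

# One endpoint per interface, as created by the role.
for interface, url in {
    "internal": "https://api-int.testbed.osism.xyz:9511/v1",
    "public": "https://api.testbed.osism.xyz:9511/v1",
}.items():
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url,
    )
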
*********** 2025-05-13 23:57:35.470755 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-13 23:57:35.470766 | orchestrator | 2025-05-13 23:57:35.470777 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-13 23:57:35.470787 | orchestrator | Tuesday 13 May 2025 23:55:51 +0000 (0:00:03.125) 0:00:14.848 *********** 2025-05-13 23:57:35.470811 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 23:57:35.470823 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-13 23:57:35.470834 | orchestrator | 2025-05-13 23:57:35.470844 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-13 23:57:35.470855 | orchestrator | Tuesday 13 May 2025 23:55:55 +0000 (0:00:03.812) 0:00:18.661 *********** 2025-05-13 23:57:35.470866 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 23:57:35.470877 | orchestrator | 2025-05-13 23:57:35.470887 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-05-13 23:57:35.470898 | orchestrator | Tuesday 13 May 2025 23:55:59 +0000 (0:00:03.264) 0:00:21.925 *********** 2025-05-13 23:57:35.470909 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-13 23:57:35.470919 | orchestrator | 2025-05-13 23:57:35.470930 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-05-13 23:57:35.470940 | orchestrator | Tuesday 13 May 2025 23:56:02 +0000 (0:00:03.871) 0:00:25.796 *********** 2025-05-13 23:57:35.470951 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:57:35.470962 | orchestrator | 2025-05-13 23:57:35.470973 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-13 23:57:35.470995 | orchestrator | Tuesday 13 May 2025 23:56:06 +0000 (0:00:03.159) 0:00:28.955 *********** 2025-05-13 23:57:35.471013 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:57:35.471024 | orchestrator | 2025-05-13 23:57:35.471034 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-13 23:57:35.471045 | orchestrator | Tuesday 13 May 2025 23:56:09 +0000 (0:00:03.858) 0:00:32.814 *********** 2025-05-13 23:57:35.471056 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:57:35.471066 | orchestrator | 2025-05-13 23:57:35.471077 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-13 23:57:35.471088 | orchestrator | Tuesday 13 May 2025 23:56:13 +0000 (0:00:03.583) 0:00:36.397 *********** 2025-05-13 23:57:35.471107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.471122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.471134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.471154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.471177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.471189 | orchestrator | changed: [testbed-node-2] 
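
Editor's note: before templating the container configs, the role created the Magnum trustee domain, user and role grant (tasks at 23:56:02 through 23:56:09 above); Magnum uses these trustee credentials to act inside user projects when managing clusters. A sketch of the equivalent openstacksdk calls; the names "magnum" and "magnum_trustee_domain_admin" are common defaults assumed for illustration and do not appear in this log:

import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

# Trustee domain that will own per-cluster trustee users.
domain = conn.identity.create_domain(
    name="magnum",
    description="Owns users and projects created by magnum",
)

# Domain admin that magnum-conductor authenticates as.
user = conn.identity.create_user(
    name="magnum_trustee_domain_admin",
    domain_id=domain.id,
    password="secret",  # placeholder; normally generated into passwords.yml
)

# Grant the admin role on the trustee domain to that user.
role = conn.identity.find_role("admin")
conn.identity.assign_domain_role_to_user(domain, user, role)
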
=> (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.471200 | orchestrator | 2025-05-13 23:57:35.471212 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-13 23:57:35.471224 | orchestrator | Tuesday 13 May 2025 23:56:14 +0000 (0:00:01.447) 0:00:37.845 *********** 2025-05-13 23:57:35.471234 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:57:35.471245 | orchestrator | 2025-05-13 23:57:35.471256 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-13 23:57:35.471267 | orchestrator | Tuesday 13 May 2025 23:56:15 +0000 (0:00:00.144) 0:00:37.989 *********** 2025-05-13 23:57:35.471277 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:57:35.471288 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:57:35.471299 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:57:35.471309 | orchestrator | 2025-05-13 23:57:35.471320 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-13 23:57:35.471331 | orchestrator | Tuesday 13 May 2025 23:56:15 +0000 (0:00:00.615) 0:00:38.605 *********** 2025-05-13 23:57:35.471341 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:57:35.471352 | orchestrator | 2025-05-13 23:57:35.471362 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-13 23:57:35.471373 | orchestrator | Tuesday 13 May 2025 23:56:16 +0000 (0:00:00.919) 0:00:39.525 *********** 2025-05-13 23:57:35.471384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.471404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.471426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.471438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.471450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.471462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.471473 | orchestrator | 2025-05-13 23:57:35.471511 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-13 23:57:35.471524 | orchestrator | Tuesday 13 May 2025 23:56:19 +0000 (0:00:02.661) 0:00:42.186 *********** 2025-05-13 23:57:35.471535 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:57:35.471545 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:57:35.471556 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:57:35.471567 | orchestrator | 2025-05-13 23:57:35.471577 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-13 23:57:35.471594 | orchestrator | Tuesday 13 May 2025 23:56:19 +0000 (0:00:00.347) 0:00:42.534 *********** 2025-05-13 23:57:35.471605 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:57:35.471616 | orchestrator | 2025-05-13 23:57:35.471627 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-13 23:57:35.471638 | orchestrator | Tuesday 13 May 2025 23:56:20 +0000 (0:00:00.848) 0:00:43.382 *********** 2025-05-13 23:57:35.471650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.471666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.471678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.471690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.471714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.471726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.471738 | orchestrator | 2025-05-13 23:57:35.471754 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-13 23:57:35.471780 | orchestrator | Tuesday 13 May 2025 23:56:23 +0000 (0:00:02.580) 0:00:45.963 *********** 2025-05-13 23:57:35.471801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:57:35.471823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:57:35.471843 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:57:35.471864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:57:35.471907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:57:35.471931 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:57:35.471961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:57:35.471982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:57:35.471995 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:57:35.472005 | orchestrator | 2025-05-13 23:57:35.472016 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-13 23:57:35.472027 | orchestrator | Tuesday 13 May 2025 23:56:23 +0000 (0:00:00.671) 0:00:46.635 *********** 2025-05-13 23:57:35.472038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:57:35.472062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:57:35.472074 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:57:35.472092 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:57:35.472109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:57:35.472121 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:57:35.472132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:57:35.472144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:57:35.472162 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:57:35.472173 | 
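Every item echoed by these service-cert-copy tasks is a full kolla-ansible service definition: one dict per container carrying the image, bind mounts, environment, optional HAProxy exposure, and a Docker healthcheck whose interval, timeout and start_period are plain seconds. As a rough, hypothetical sketch only (in the real run the kolla_docker Ansible module consumes these dicts; docker_run_args below is an invented name), this is how such an entry maps onto ordinary docker run options:

    # Hypothetical sketch: translate a kolla-style service dict, like the
    # 'magnum-api' items dumped above, into "docker run" arguments.
    service = {
        "container_name": "magnum_api",
        "image": "registry.osism.tech/kolla/magnum-api:2024.2",
        "environment": {"DUMMY_ENVIRONMENT": "kolla_useless_env"},
        "volumes": [
            "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "",  # empty placeholders show up in the log and must be skipped
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
            "timeout": "30",
        },
    }

    def docker_run_args(svc):
        args = ["docker", "run", "-d", "--name", svc["container_name"]]
        for key, value in svc.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        for volume in svc.get("volumes", []):
            if volume:  # drop the '' placeholders
                args += ["--volume", volume]
        hc = svc.get("healthcheck")
        if hc:
            # 'CMD-SHELL' means the second element is a shell command string.
            args += ["--health-cmd", hc["test"][1],
                     "--health-interval", f"{hc['interval']}s",
                     "--health-retries", str(hc["retries"]),
                     "--health-start-period", f"{hc['start_period']}s",
                     "--health-timeout", f"{hc['timeout']}s"]
        return args + [svc["image"]]

    print(" ".join(docker_run_args(service)))

The empty strings in 'volumes' appear to be placeholders for optional mounts that are disabled in this testbed, which is why the sketch filters them out before building the command line.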
orchestrator | 2025-05-13 23:57:35.472183 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-13 23:57:35.472194 | orchestrator | Tuesday 13 May 2025 23:56:25 +0000 (0:00:01.522) 0:00:48.157 *********** 2025-05-13 23:57:35.472450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.472471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.472613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.472663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.472687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.472712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.472723 | orchestrator | 2025-05-13 23:57:35.472735 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-13 23:57:35.472746 | orchestrator | Tuesday 13 May 2025 23:56:27 +0000 (0:00:02.485) 0:00:50.643 *********** 2025-05-13 23:57:35.472757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.472774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.472792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.472804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.472821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.472833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.472844 | orchestrator | 2025-05-13 23:57:35.472854 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-13 23:57:35.472867 | orchestrator | Tuesday 13 May 2025 23:56:33 +0000 (0:00:05.382) 0:00:56.026 *********** 2025-05-13 23:57:35.472877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:57:35.472893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:57:35.472904 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:57:35.472914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:57:35.472930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:57:35.472940 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:57:35.472954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 23:57:35.472965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:57:35.472980 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:57:35.472990 | orchestrator | 2025-05-13 23:57:35.472999 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-13 23:57:35.473009 | orchestrator | Tuesday 13 May 2025 23:56:34 +0000 (0:00:00.967) 0:00:56.993 *********** 2025-05-13 23:57:35.473037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.473062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.473074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 23:57:35.473090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.473111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:57:35.473123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-13 23:57:35.473134 | orchestrator |
2025-05-13 23:57:35.473144 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-13 23:57:35.473155 | orchestrator | Tuesday 13 May 2025 23:56:36 +0000 (0:00:02.163) 0:00:59.157 ***********
2025-05-13 23:57:35.473166 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:57:35.473178 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:57:35.473188 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:57:35.473199 | orchestrator |
2025-05-13 23:57:35.473209 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-05-13 23:57:35.473220 | orchestrator | Tuesday 13 May 2025 23:56:36 +0000 (0:00:00.292) 0:00:59.449 ***********
2025-05-13 23:57:35.473231 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:57:35.473241 | orchestrator |
2025-05-13 23:57:35.473252 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-05-13 23:57:35.473263 | orchestrator | Tuesday 13 May 2025 23:56:38 +0000 (0:00:01.995) 0:01:01.444 ***********
2025-05-13 23:57:35.473274 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:57:35.473285 | orchestrator |
2025-05-13 23:57:35.473296 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-05-13 23:57:35.473307 | orchestrator | Tuesday 13 May 2025 23:56:40 +0000 (0:00:02.104) 0:01:03.549 ***********
2025-05-13 23:57:35.473324 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:57:35.473335 | orchestrator |
2025-05-13 23:57:35.473346 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-13 23:57:35.473357 | orchestrator | Tuesday 13 May 2025 23:56:56 +0000 (0:00:15.569) 0:01:19.118 ***********
2025-05-13 23:57:35.473368 | orchestrator |
2025-05-13 23:57:35.473379 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-13 23:57:35.473390 | orchestrator | Tuesday 13 May 2025 23:56:56 +0000 (0:00:00.078) 0:01:19.196 ***********
2025-05-13 23:57:35.473400 | orchestrator |
2025-05-13 23:57:35.473410 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-13 23:57:35.473419 | orchestrator | Tuesday 13 May 2025 23:56:56 +0000 (0:00:00.060) 0:01:19.257 ***********
2025-05-13 23:57:35.473429 | orchestrator |
2025-05-13 23:57:35.473438 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-05-13 23:57:35.473453 | orchestrator | Tuesday 13 May 2025 23:56:56 +0000 (0:00:00.069) 0:01:19.327 ***********
2025-05-13 23:57:35.473463 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:57:35.473472 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:57:35.473482 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:57:35.473519 | orchestrator |
2025-05-13 23:57:35.473534 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-05-13 23:57:35.473544 | orchestrator | Tuesday 13 May 2025 23:57:16 +0000 (0:00:19.570) 0:01:38.897 ***********
2025-05-13 23:57:35.473553 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:57:35.473562 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:57:35.473571 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:57:35.473581 | orchestrator |
2025-05-13 23:57:35.473590 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:57:35.473601 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 23:57:35.473611 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-13 23:57:35.473620 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-13 23:57:35.473630 | orchestrator |
2025-05-13 23:57:35.473639 | orchestrator |
2025-05-13 23:57:35.473649 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:57:35.473658 | orchestrator | Tuesday 13 May 2025 23:57:32 +0000 (0:00:16.263) 0:01:55.161 ***********
2025-05-13 23:57:35.473668 | orchestrator | ===============================================================================
2025-05-13 23:57:35.473677 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.57s
2025-05-13 23:57:35.473687 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.26s
2025-05-13 23:57:35.473696 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.57s
2025-05-13 23:57:35.473705 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.49s
2025-05-13 23:57:35.473714 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.38s
2025-05-13 23:57:35.473724 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.87s
2025-05-13 23:57:35.473733 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.86s
2025-05-13 23:57:35.473743 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.81s
2025-05-13 23:57:35.473752 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.66s
2025-05-13 23:57:35.473761 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.58s
2025-05-13 23:57:35.473771 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.26s
2025-05-13 23:57:35.473780 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.16s
2025-05-13 23:57:35.473789 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.13s
2025-05-13 23:57:35.473798 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.66s
2025-05-13 23:57:35.473808 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.58s
2025-05-13 23:57:35.473817 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.49s
2025-05-13 23:57:35.473827 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.16s
2025-05-13 23:57:35.473836 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.10s
2025-05-13 23:57:35.473846 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.00s
2025-05-13 23:57:35.473855 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.52s
2025-05-13 23:57:38.518924 | orchestrator | 2025-05-13 23:57:38 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED
2025-05-13 23:57:38.522972 | orchestrator | 2025-05-13 23:57:38 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:57:38.527052 | orchestrator | 2025-05-13 23:57:38 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state STARTED
2025-05-13 23:57:38.527380 | orchestrator | 2025-05-13 23:57:38 | INFO  | Wait 1 second(s) until the next check
[... the same three STARTED polls and the 1-second wait repeat every ~3 seconds from 23:57:41 through 23:58:12 ...]
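The elided polls all follow one simple wait loop: query each outstanding task, log its state, sleep a second, and repeat until everything reaches a terminal state; the ~3-second spacing between cycles is presumably the 1-second sleep plus the latency of the three queries themselves. A minimal sketch of that pattern, assuming a hypothetical get_task_state-style callable rather than the real OSISM task API:

    import time

    TERMINAL = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        # Poll every pending task, print its state, drop it once terminal.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

    # Tiny demo with a canned backend instead of a real task store:
    calls = {"n": 0}
    def fake_state(task_id):
        calls["n"] += 1
        return "SUCCESS" if calls["n"] > 6 else "STARTED"

    wait_for_tasks(["f6e81fa1", "caf353ef", "67556688"], fake_state, interval=0.01)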
2025-05-13 23:58:15.084348 | orchestrator | 2025-05-13 23:58:15 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED
2025-05-13 23:58:15.084785 | orchestrator | 2025-05-13 23:58:15 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:58:15.087272 | orchestrator | 2025-05-13 23:58:15 | INFO  | Task 67556688-89f1-41e6-901c-2ee1d629e1de is in state SUCCESS
2025-05-13 23:58:15.089531 | orchestrator |
2025-05-13 23:58:15.089602 | orchestrator |
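Task 67556688-89f1-41e6-901c-2ee1d629e1de finishing means the magnum play above ran to completion; its PLAY RECAP is the part a wrapper would inspect, since failed= and unreachable= must stay at 0 for the run to count as healthy. A small sketch (not part of the job itself) for parsing those recap counters:

    import re

    RECAP_RE = re.compile(
        r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
        r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
    )

    # Example line taken from the PLAY RECAP above:
    line = "testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0"
    match = RECAP_RE.match(line)
    assert match is not None
    counters = {key: int(val) for key, val in match.groupdict().items() if key != "host"}
    print(match["host"], counters)
    assert counters["failed"] == 0 and counters["unreachable"] == 0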
2025-05-13 23:58:15.089617 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:58:15.089630 | orchestrator |
2025-05-13 23:58:15.089642 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:58:15.089670 | orchestrator | Tuesday 13 May 2025 23:55:55 +0000 (0:00:00.302) 0:00:00.302 ***********
2025-05-13 23:58:15.089682 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:58:15.089694 | orchestrator | ok: [testbed-node-1]
2025-05-13 23:58:15.089705 | orchestrator | ok: [testbed-node-2]
2025-05-13 23:58:15.089716 | orchestrator |
2025-05-13 23:58:15.089728 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 23:58:15.089738 | orchestrator | Tuesday 13 May 2025 23:55:55 +0000 (0:00:00.312) 0:00:00.614 ***********
2025-05-13 23:58:15.089749 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-05-13 23:58:15.089858 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-05-13 23:58:15.089883 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-05-13 23:58:15.090176 | orchestrator |
2025-05-13 23:58:15.090207 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-05-13 23:58:15.090227 | orchestrator |
2025-05-13 23:58:15.090243 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-05-13 23:58:15.090256 | orchestrator | Tuesday 13 May 2025 23:55:56 +0000 (0:00:00.493) 0:00:01.108 ***********
2025-05-13 23:58:15.090268 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:58:15.090281 | orchestrator |
2025-05-13 23:58:15.090294 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-05-13 23:58:15.090307 | orchestrator | Tuesday 13 May 2025 23:55:56 +0000 (0:00:00.573) 0:00:01.682 ***********
2025-05-13 23:58:15.090323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-13 23:58:15.090341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-13 23:58:15.090355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True,
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:58:15.090368 | orchestrator | 2025-05-13 23:58:15.090381 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-13 23:58:15.090397 | orchestrator | Tuesday 13 May 2025 23:55:57 +0000 (0:00:00.772) 0:00:02.454 *********** 2025-05-13 23:58:15.090416 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-13 23:58:15.090478 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-13 23:58:15.090496 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:58:15.090515 | orchestrator | 2025-05-13 23:58:15.090534 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-13 23:58:15.090554 | orchestrator | Tuesday 13 May 2025 23:55:58 +0000 (0:00:00.942) 0:00:03.396 *********** 2025-05-13 23:58:15.090572 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:58:15.090591 | orchestrator | 2025-05-13 23:58:15.090648 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-13 23:58:15.090677 | orchestrator | Tuesday 13 May 2025 23:55:59 +0000 (0:00:00.730) 0:00:04.127 *********** 2025-05-13 23:58:15.090715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:58:15.090728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:58:15.090740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:58:15.090763 | orchestrator | 2025-05-13 23:58:15.090775 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-05-13 23:58:15.090785 | orchestrator | Tuesday 13 May 2025 23:56:00 +0000 (0:00:01.458) 0:00:05.586 *********** 2025-05-13 23:58:15.090797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 23:58:15.090808 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:15.090820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 23:58:15.090831 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:15.090852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 23:58:15.090878 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:15.090896 | orchestrator | 2025-05-13 23:58:15.090920 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-13 23:58:15.091058 | orchestrator | Tuesday 13 May 2025 23:56:00 +0000 (0:00:00.355) 0:00:05.941 *********** 2025-05-13 23:58:15.091076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 23:58:15.091088 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:15.091100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 23:58:15.091111 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:15.091123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 23:58:15.091135 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:15.091146 | orchestrator | 2025-05-13 23:58:15.091157 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-13 23:58:15.091168 | orchestrator | Tuesday 13 May 2025 23:56:01 +0000 (0:00:00.848) 0:00:06.790 *********** 2025-05-13 23:58:15.091179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:58:15.091200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:58:15.091228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:58:15.091241 | orchestrator | 2025-05-13 23:58:15.091259 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-13 23:58:15.091278 | orchestrator | Tuesday 13 May 2025 23:56:03 +0000 (0:00:01.346) 0:00:08.136 *********** 2025-05-13 23:58:15.091301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:58:15.091324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:58:15.091338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 23:58:15.091349 | orchestrator | 2025-05-13 23:58:15.091360 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-13 23:58:15.091371 | orchestrator | Tuesday 13 May 2025 23:56:04 
+0000 (0:00:01.444) 0:00:09.580 *********** 2025-05-13 23:58:15.091390 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:15.091401 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:15.091411 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:15.091422 | orchestrator | 2025-05-13 23:58:15.091454 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-13 23:58:15.091465 | orchestrator | Tuesday 13 May 2025 23:56:05 +0000 (0:00:00.456) 0:00:10.037 *********** 2025-05-13 23:58:15.091476 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-13 23:58:15.091488 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-13 23:58:15.091498 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-13 23:58:15.091509 | orchestrator | 2025-05-13 23:58:15.091520 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-13 23:58:15.091530 | orchestrator | Tuesday 13 May 2025 23:56:06 +0000 (0:00:01.217) 0:00:11.255 *********** 2025-05-13 23:58:15.091540 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-13 23:58:15.091552 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-13 23:58:15.091562 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-13 23:58:15.091573 | orchestrator | 2025-05-13 23:58:15.091583 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-13 23:58:15.091594 | orchestrator | Tuesday 13 May 2025 23:56:07 +0000 (0:00:01.252) 0:00:12.507 *********** 2025-05-13 23:58:15.091613 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:58:15.091624 | orchestrator | 2025-05-13 23:58:15.091635 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-13 23:58:15.091646 | orchestrator | Tuesday 13 May 2025 23:56:08 +0000 (0:00:00.767) 0:00:13.275 *********** 2025-05-13 23:58:15.091662 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-13 23:58:15.091673 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-13 23:58:15.091684 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:58:15.091695 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:58:15.091706 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:58:15.091717 | orchestrator | 2025-05-13 23:58:15.091727 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-13 23:58:15.091738 | orchestrator | Tuesday 13 May 2025 23:56:09 +0000 (0:00:00.971) 0:00:14.246 *********** 2025-05-13 23:58:15.091749 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:15.091760 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:15.091771 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:15.091782 | orchestrator | 2025-05-13 23:58:15.091792 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-13 23:58:15.091803 | orchestrator | Tuesday 13 May 2025 23:56:09 +0000 (0:00:00.470) 0:00:14.716 *********** 
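The "Configuring Prometheus as data source" and "Configuring dashboards provisioning" tasks above are standard Grafana file provisioning: a rendered prometheus.yaml registers Prometheus as a data source, and the provisioning.yaml overlay tells Grafana which directory to load dashboard JSON from. A minimal sketch of what such rendered files typically contain, assuming the stock Grafana provisioning schema; the data source name, URL, and paths below are illustrative assumptions, not the values the kolla-ansible templates actually render:

    # datasource provisioning (what a rendered prometheus.yaml might look like)
    apiVersion: 1
    datasources:
      - name: Prometheus               # assumed display name
        type: prometheus
        access: proxy
        url: http://127.0.0.1:9091     # assumed endpoint; the real template targets the kolla-managed Prometheus
        isDefault: true

    # dashboard provisioning (what provisioning.yaml might look like)
    apiVersion: 1
    providers:
      - name: default                  # assumed provider name
        type: file
        options:
          path: /var/lib/grafana/dashboards   # assumed; must match where the dashboard files are mounted

The grafana_server and grafana_server_external entries in the haproxy dict of the loop items above are what expose this Grafana on port 3000 via the internal VIP and, through external_fqdn api.testbed.osism.xyz, the public endpoint.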
2025-05-13 23:58:15.091815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088175, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1173356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.091828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088175, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1173356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.091847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088175, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1173356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.091860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088155, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1113355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.091883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088155, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1113355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.091911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088155, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1113355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.091932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088150, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1043353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.091952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088150, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1043353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.091982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088150, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1043353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088164, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1143355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088164, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1143355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088164, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1143355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088133, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0953352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088133, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0953352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088133, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0953352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088151, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1073353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088151, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1073353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088151, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1073353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088162, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1133356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088162, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1133356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088162, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1133356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-05-13 23:58:15.092189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088132, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0933352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088132, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0933352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088132, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0933352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088123, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.087335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088123, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.087335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088123, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.087335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088135, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0973353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088135, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0973353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088135, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0973353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088126, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0913353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088126, 'dev': 174, 'nlink': 1, 
'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0913353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088157, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1123354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088126, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0913353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088157, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1123354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088143, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1023355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088157, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1123354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088143, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1023355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088143, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1023355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088169, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1153355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088169, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1153355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088131, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0933352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092651 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088169, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1153355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088131, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0933352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088131, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0933352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088153, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1093354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088153, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1093354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088153, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1093354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088124, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0903351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088124, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0903351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088124, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0903351, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088127, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0933352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 
'inode': 1088127, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0933352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088127, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.0933352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088146, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1033354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088146, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1033354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088146, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1033354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088355, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.385339, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088355, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.385339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.092985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088355, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.385339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.093005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088343, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.373339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.093023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088343, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.373339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.093052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088179, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1203356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.093072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088343, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.373339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.093084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088179, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1203356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.093096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088414, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.3993392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.093107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088179, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.1203356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 23:58:15.093119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088414, 'dev': 174, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747176776.3993392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
2025-05-13 23:58:15.093148 | orchestrator | changed: [testbed-node-1] / [testbed-node-0] / [testbed-node-2] — grafana : Copying over custom dashboards. Every dashboard below was reported "changed" on all three nodes (entries interleaved in the original output and logged between 23:58:15.093148 and 23:58:15.094651); the per-item stat dicts are condensed into one row per file:

    item key                                         size (B)   inode     ctime
    infrastructure/blackbox.json                     31128      1088183   1747176776.1203356
    infrastructure/prometheus_alertmanager.json      115472     1088414   1747176776.3993392
    infrastructure/prometheus-remote-write.json      22317      1088405   1747176776.3973393
    infrastructure/rabbitmq.json                     222049     1088417   1747176776.4023392
    infrastructure/node_exporter_side_by_side.json   70691      1088375   1747176776.3923392
    infrastructure/opensearch.json                   65458      1088404   1747176776.3963392
    infrastructure/cadvisor.json                     53882      1088186   1747176776.1213355
    infrastructure/memcached.json                    24243      1088347   1747176776.375339
    infrastructure/redfish.json                      38087      1088431   1747176776.4033394
    infrastructure/prometheus.json                   21898      1088409   1747176776.3973393
    infrastructure/elasticsearch.json                187864     1088193   1747176776.1263356
    infrastructure/database.json                     30898      1088191   1747176776.1223357
    infrastructure/fluentd.json                      82960      1088217   1747176776.3333385
    infrastructure/haproxy.json                      410814     1088235   1747176776.372339
    infrastructure/node-cluster-rsrc-use.json        16098      1088351   1747176776.376339
    infrastructure/nodes.json                        21109      1088402   1747176776.3953393
    infrastructure/node-rsrc-use.json                15725      1088354   1747176776.376339
    openstack/openstack.json                         57270      1088438   1747176776.4253397

    Common to every item: path = /operations/grafana/dashboards/<item key>; regular file
    ('isreg': True, all other type flags False); mode '0644' ('rusr'/'wusr'/'rgrp'/'roth' True,
    every other permission flag plus 'isuid'/'isgid' False); owner root:root (uid 0, gid 0);
    dev 174, nlink 1, atime = mtime = 1747129592.0. Only the testbed-node-2 entry for
    prometheus_alertmanager.json falls inside this excerpt; its other entries precede it.
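The loop results above come from a task that stats every dashboard JSON found under the dashboards directory and copies it into the Grafana config directory on each node. A minimal sketch of that find-and-copy pattern (not kolla-ansible's actual task; paths and variable names are illustrative):

```yaml
---
# Sketch only: copy every dashboard JSON to the Grafana config directory,
# preserving the subdirectory (e.g. infrastructure/, openstack/) seen in the
# item keys above. Destination directories are assumed to exist already.
- hosts: grafana
  become: true
  tasks:
    - name: Find custom grafana dashboards
      ansible.builtin.find:
        paths: /operations/grafana/dashboards
        patterns: "*.json"
        recurse: true
      register: dashboards
      delegate_to: localhost

    - name: Copying over custom dashboards
      ansible.builtin.copy:
        src: "{{ item.path }}"
        dest: "/etc/kolla/grafana/dashboards/{{ item.path | relpath('/operations/grafana/dashboards') }}"
        mode: "0644"
      loop: "{{ dashboards.files }}"
      loop_control:
        label: "{{ item.path }}"
```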
2025-05-13 23:58:15.094651 | orchestrator |
2025-05-13 23:58:15.094665 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-05-13 23:58:15.094677 | orchestrator | Tuesday 13 May 2025 23:56:47 +0000 (0:00:37.476) 0:00:52.192 ***********
2025-05-13 23:58:15.094688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-13 23:58:15.094700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-13 23:58:15.094719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-13 23:58:15.094731 | orchestrator |
2025-05-13 23:58:15.094745 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-05-13 23:58:15.094765 | orchestrator | Tuesday 13 May 2025 23:56:48 +0000 (0:00:00.986) 0:00:53.179 ***********
2025-05-13 23:58:15.094783 | orchestrator | changed: [testbed-node-0]
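The database step and the user/grant task that follows it boil down to a MySQL database plus a scoped user. kolla-ansible drives this through its toolbox container; the sketch below uses the community.mysql modules as a stand-in, with illustrative host, credential and variable names:

```yaml
# Sketch only: create the grafana schema and a user limited to it. Only the
# first node runs it, matching the single "changed: [testbed-node-0]" above.
- name: Creating grafana database
  community.mysql.mysql_db:
    login_host: api-int.testbed.osism.xyz
    login_user: root
    login_password: "{{ database_password }}"   # illustrative variable
    name: grafana
  run_once: true

- name: Creating grafana database user and setting permissions
  community.mysql.mysql_user:
    login_host: api-int.testbed.osism.xyz
    login_user: root
    login_password: "{{ database_password }}"
    name: grafana
    password: "{{ grafana_database_password }}"  # illustrative variable
    host: "%"
    priv: "grafana.*:ALL"
  run_once: true
```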
2025-05-13 23:58:15.094802 | orchestrator |
2025-05-13 23:58:15.094819 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-05-13 23:58:15.094836 | orchestrator | Tuesday 13 May 2025 23:56:50 +0000 (0:00:02.141) 0:00:55.321 ***********
2025-05-13 23:58:15.094852 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:15.094868 | orchestrator |
2025-05-13 23:58:15.094886 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-13 23:58:15.094903 | orchestrator | Tuesday 13 May 2025 23:56:52 +0000 (0:00:02.623) 0:00:57.944 ***********
2025-05-13 23:58:15.094921 | orchestrator |
2025-05-13 23:58:15.094932 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-13 23:58:15.094950 | orchestrator | Tuesday 13 May 2025 23:56:52 +0000 (0:00:00.060) 0:00:58.005 ***********
2025-05-13 23:58:15.094960 | orchestrator |
2025-05-13 23:58:15.094970 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-13 23:58:15.094985 | orchestrator | Tuesday 13 May 2025 23:56:53 +0000 (0:00:00.072) 0:00:58.077 ***********
2025-05-13 23:58:15.094995 | orchestrator |
2025-05-13 23:58:15.095005 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-05-13 23:58:15.095014 | orchestrator | Tuesday 13 May 2025 23:56:53 +0000 (0:00:00.067) 0:00:58.145 ***********
2025-05-13 23:58:15.095024 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:58:15.095033 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:58:15.095043 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:15.095052 | orchestrator |
2025-05-13 23:58:15.095062 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-05-13 23:58:15.095072 | orchestrator | Tuesday 13 May 2025 23:56:54 +0000 (0:00:01.814) 0:00:59.960 ***********
2025-05-13 23:58:15.095081 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:58:15.095090 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:58:15.095100 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-05-13 23:58:15.095110 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-05-13 23:58:15.095121 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-05-13 23:58:15.095130 | orchestrator | ok: [testbed-node-0]
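The FAILED - RETRYING lines are normal output from an Ansible until-loop: the module fails while the service is still coming up and is retried until it answers. A minimal sketch of the pattern, assuming an HTTP probe against the Grafana port (URL, delay and certificate handling are illustrative; the retry budget of 12 matches the countdown in the log):

```yaml
# Sketch only: poll Grafana on the first node of the group until it serves
# its login page; each failed attempt prints a "FAILED - RETRYING" line.
- name: Waiting for grafana to start on first node
  ansible.builtin.uri:
    url: "https://api-int.testbed.osism.xyz:3000/login"
    validate_certs: false   # testbed uses its own CA; illustrative
  register: result
  until: result.status == 200
  retries: 12
  delay: 10
  when: inventory_hostname == groups['grafana'] | first
```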
2025-05-13 23:58:15.095141 | orchestrator |
2025-05-13 23:58:15.095150 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-05-13 23:58:15.095160 | orchestrator | Tuesday 13 May 2025 23:57:32 +0000 (0:00:37.770) 0:01:37.730 ***********
2025-05-13 23:58:15.095180 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:58:15.095190 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:58:15.095200 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:58:15.095209 | orchestrator |
2025-05-13 23:58:15.095219 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-05-13 23:58:15.095228 | orchestrator | Tuesday 13 May 2025 23:58:08 +0000 (0:00:36.205) 0:02:13.936 ***********
2025-05-13 23:58:15.095238 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:58:15.095247 | orchestrator |
2025-05-13 23:58:15.095257 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-05-13 23:58:15.095266 | orchestrator | Tuesday 13 May 2025 23:58:11 +0000 (0:00:02.319) 0:02:16.256 ***********
2025-05-13 23:58:15.095276 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:58:15.095285 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:58:15.095294 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:58:15.095304 | orchestrator |
2025-05-13 23:58:15.095313 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-05-13 23:58:15.095323 | orchestrator | Tuesday 13 May 2025 23:58:11 +0000 (0:00:00.295) 0:02:16.552 ***********
2025-05-13 23:58:15.095333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-05-13 23:58:15.095345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
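The datasource task iterates the dict shown in the items above and registers each enabled entry against Grafana's HTTP API (the disabled influxdb entry is skipped; the opensearch entry is created). A sketch of the API call for the opensearch item, with illustrative admin credentials; Grafana's real endpoint here is POST /api/datasources:

```yaml
# Sketch only: register the OpenSearch datasource from the loop item above.
- name: Enable grafana datasources
  ansible.builtin.uri:
    url: "https://api-int.testbed.osism.xyz:3000/api/datasources"
    method: POST
    user: admin
    password: "{{ grafana_admin_password }}"   # illustrative variable
    force_basic_auth: true
    body_format: json
    body:
      name: opensearch
      type: grafana-opensearch-datasource
      access: proxy
      url: "https://api-int.testbed.osism.xyz:9200"
      jsonData:
        flavor: OpenSearch
        database: "flog-*"
        version: "2.11.1"
        timeField: "@timestamp"
        logLevelField: log_level
    status_code: [200, 409]   # 409 when the datasource already exists
  run_once: true
```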
2025-05-13 23:58:15.095355 | orchestrator |
2025-05-13 23:58:15.095365 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-05-13 23:58:15.095375 | orchestrator | Tuesday 13 May 2025 23:58:13 +0000 (0:00:02.422) 0:02:18.975 ***********
2025-05-13 23:58:15.095385 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:58:15.095394 | orchestrator |
2025-05-13 23:58:15.095404 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 23:58:15.095414 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-13 23:58:15.095446 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-13 23:58:15.095458 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-13 23:58:15.095468 | orchestrator |
2025-05-13 23:58:15.095477 | orchestrator |
2025-05-13 23:58:15.095487 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 23:58:15.095498 | orchestrator | Tuesday 13 May 2025 23:58:14 +0000 (0:00:00.273) 0:02:19.248 ***********
2025-05-13 23:58:15.095508 | orchestrator | ===============================================================================
2025-05-13 23:58:15.095518 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 37.77s
2025-05-13 23:58:15.095528 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.48s
2025-05-13 23:58:15.095537 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.21s
2025-05-13 23:58:15.095547 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.62s
2025-05-13 23:58:15.095557 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.42s
2025-05-13 23:58:15.095574 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.32s
2025-05-13 23:58:15.095584 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.14s
2025-05-13 23:58:15.095606 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.81s
2025-05-13 23:58:15.095616 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.46s
2025-05-13 23:58:15.095626 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.44s
2025-05-13 23:58:15.095637 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.35s
2025-05-13 23:58:15.095647 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.25s
2025-05-13 23:58:15.095656 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s
2025-05-13 23:58:15.095666 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.99s
2025-05-13 23:58:15.095676 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.97s
2025-05-13 23:58:15.095686 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.94s
2025-05-13 23:58:15.095696 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.85s
2025-05-13 23:58:15.095705 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.77s
2025-05-13 23:58:15.095715 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.77s
2025-05-13 23:58:15.095724 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.73s
2025-05-13 23:58:15.095736 | orchestrator | 2025-05-13 23:58:15 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:58:18.137015 | orchestrator | 2025-05-13 23:58:18 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED
2025-05-13 23:58:18.139190 | orchestrator | 2025-05-13 23:58:18 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:58:18.139229 | orchestrator | 2025-05-13 23:58:18 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:58:21.178236 | orchestrator | 2025-05-13 23:58:21 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED
2025-05-13 23:58:21.179322 | orchestrator | 2025-05-13 23:58:21 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:58:21.179347 | orchestrator | 2025-05-13 23:58:21 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:58:24.227988 | orchestrator | 2025-05-13 23:58:24 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED
2025-05-13 23:58:24.229336 | orchestrator | 2025-05-13 23:58:24 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:58:24.229357 | orchestrator | 2025-05-13 23:58:24 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:58:27.262643 | orchestrator | 2025-05-13 23:58:27 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED
2025-05-13 23:58:27.263451 | orchestrator | 2025-05-13 23:58:27 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:58:27.263495 | orchestrator | 2025-05-13 23:58:27 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:58:30.317070 | orchestrator | 2025-05-13 23:58:30 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED
2025-05-13 23:58:30.317233 | orchestrator | 2025-05-13 23:58:30 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:58:30.317245 | orchestrator | 2025-05-13 23:58:30 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:58:33.367770 | orchestrator | 2025-05-13 23:58:33 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED
2025-05-13 23:58:33.369355 | orchestrator | 2025-05-13 23:58:33 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state STARTED
2025-05-13 23:58:33.369862 | orchestrator | 2025-05-13 23:58:33 | INFO  | Wait 1 second(s) until the next check
2025-05-13 23:58:36.415807 | orchestrator | 2025-05-13 23:58:36 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED
2025-05-13 23:58:36.418944 | orchestrator | 2025-05-13 23:58:36 | INFO  | Task caf353ef-a173-473a-8fe0-54be960b8023 is in state SUCCESS
2025-05-13 23:58:36.420811 | orchestrator |
2025-05-13 23:58:36.420864 | orchestrator |
2025-05-13 23:58:36.420878 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 23:58:36.420891 | orchestrator |
2025-05-13 23:58:36.420903 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-05-13 23:58:36.420956 | orchestrator | Tuesday 13 May 2025 23:49:06 +0000 (0:00:00.505) 0:00:00.505 ***********
2025-05-13 23:58:36.420970 | orchestrator | changed: [testbed-manager]
2025-05-13 23:58:36.420983 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:36.420994 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:58:36.421005 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:58:36.421016 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:58:36.421026 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:58:36.421037 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:58:36.421048 | orchestrator |
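The three "Group hosts based on ..." tasks build dynamic inventory groups from configuration facts with Ansible's group_by module, so that later plays can target, say, every host with nova enabled. A minimal sketch of the pattern (variable names are illustrative, not necessarily kolla-ansible's):

```yaml
# Sketch only: dynamic grouping by release, action and service flags. The
# item label "enable_nova_True" in the log matches keys built this way.
- name: Group hosts based on OpenStack release
  ansible.builtin.group_by:
    key: "openstack_release_{{ openstack_release }}"

- name: Group hosts based on Kolla action
  ansible.builtin.group_by:
    key: "kolla_action_{{ kolla_action }}"

- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "{{ item }}"
  loop:
    - "enable_nova_{{ enable_nova | bool }}"
```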
2025-05-13 23:58:36.421158 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 23:58:36.421245 | orchestrator | Tuesday 13 May 2025 23:49:07 +0000 (0:00:01.031) 0:00:01.537 ***********
2025-05-13 23:58:36.421258 | orchestrator | changed: [testbed-manager]
2025-05-13 23:58:36.421269 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:36.421296 | orchestrator | changed: [testbed-node-1]
2025-05-13 23:58:36.421308 | orchestrator | changed: [testbed-node-2]
2025-05-13 23:58:36.421319 | orchestrator | changed: [testbed-node-3]
2025-05-13 23:58:36.421329 | orchestrator | changed: [testbed-node-4]
2025-05-13 23:58:36.421340 | orchestrator | changed: [testbed-node-5]
2025-05-13 23:58:36.421350 | orchestrator |
2025-05-13 23:58:36.421362 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 23:58:36.421375 | orchestrator | Tuesday 13 May 2025 23:49:08 +0000 (0:00:00.801) 0:00:02.338 ***********
2025-05-13 23:58:36.421387 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-05-13 23:58:36.421423 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-05-13 23:58:36.421435 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-05-13 23:58:36.421448 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-05-13 23:58:36.421458 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-05-13 23:58:36.421494 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-05-13 23:58:36.421524 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-05-13 23:58:36.421535 | orchestrator |
2025-05-13 23:58:36.421546 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-05-13 23:58:36.421568 | orchestrator |
2025-05-13 23:58:36.421579 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-13 23:58:36.421589 | orchestrator | Tuesday 13 May 2025 23:49:08 +0000 (0:00:00.662) 0:00:03.000 ***********
2025-05-13 23:58:36.421600 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:58:36.421610 | orchestrator |
2025-05-13 23:58:36.421621 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-05-13 23:58:36.421650 | orchestrator | Tuesday 13 May 2025 23:49:09 +0000 (0:00:00.690) 0:00:03.691 ***********
2025-05-13 23:58:36.421661 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-05-13 23:58:36.421673 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-05-13 23:58:36.421683 | orchestrator |
2025-05-13 23:58:36.421694 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-05-13 23:58:36.421718 | orchestrator | Tuesday 13 May 2025 23:49:13 +0000 (0:00:03.736) 0:00:07.428 ***********
2025-05-13 23:58:36.421729 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-13 23:58:36.421799 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-13 23:58:36.421812 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:36.421924 | orchestrator |
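The loop items above show the two API-level schemas Nova needs before any compute cell exists: nova_api, and the special nova_cell0 database that holds instances which fail scheduling. A sketch of the database loop (community.mysql as a stand-in for the kolla toolbox; host and credential names illustrative):

```yaml
# Sketch only: create the API-level Nova databases on the first node,
# matching the two "changed" loop items in the log.
- name: Creating Nova databases
  community.mysql.mysql_db:
    login_host: api-int.testbed.osism.xyz
    login_user: root
    login_password: "{{ database_password }}"   # illustrative variable
    name: "{{ item }}"
  loop:
    - nova_cell0
    - nova_api
  run_once: true
```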
2025-05-13 23:58:36.421936 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-05-13 23:58:36.421947 | orchestrator | Tuesday 13 May 2025 23:49:17 +0000 (0:00:03.838) 0:00:11.267 ***********
2025-05-13 23:58:36.421958 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:36.421969 | orchestrator |
2025-05-13 23:58:36.421979 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-05-13 23:58:36.421990 | orchestrator | Tuesday 13 May 2025 23:49:17 +0000 (0:00:00.854) 0:00:12.122 ***********
2025-05-13 23:58:36.422001 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:36.422011 | orchestrator |
2025-05-13 23:58:36.422069 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-05-13 23:58:36.422081 | orchestrator | Tuesday 13 May 2025 23:49:19 +0000 (0:00:01.509) 0:00:13.631 ***********
2025-05-13 23:58:36.422092 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:36.422103 | orchestrator |
2025-05-13 23:58:36.422114 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-13 23:58:36.422125 | orchestrator | Tuesday 13 May 2025 23:49:22 +0000 (0:00:03.466) 0:00:17.098 ***********
2025-05-13 23:58:36.422135 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:58:36.422146 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:58:36.422157 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:58:36.422167 | orchestrator |
2025-05-13 23:58:36.422178 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-13 23:58:36.422189 | orchestrator | Tuesday 13 May 2025 23:49:23 +0000 (0:00:00.641) 0:00:17.739 ***********
2025-05-13 23:58:36.422230 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:58:36.422243 | orchestrator |
2025-05-13 23:58:36.422254 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-05-13 23:58:36.422264 | orchestrator | Tuesday 13 May 2025 23:49:52 +0000 (0:00:29.100) 0:00:46.839 ***********
2025-05-13 23:58:36.422275 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:36.422285 | orchestrator |
2025-05-13 23:58:36.422296 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-13 23:58:36.422306 | orchestrator | Tuesday 13 May 2025 23:50:06 +0000 (0:00:13.455) 0:01:00.295 ***********
2025-05-13 23:58:36.422317 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:58:36.422328 | orchestrator |
2025-05-13 23:58:36.422338 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-13 23:58:36.422349 | orchestrator | Tuesday 13 May 2025 23:50:18 +0000 (0:00:11.972) 0:01:12.267 ***********
2025-05-13 23:58:36.422376 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:58:36.422387 | orchestrator |
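The cell0 mapping and cell listing steps wrap nova-manage cell_v2 commands, which kolla-ansible runs inside its bootstrap containers. A sketch of the equivalent operations as one might issue them by hand through an existing nova_api container (container name and connection string are illustrative):

```yaml
# Sketch only: map cell0 and list the cells nova currently knows about.
- name: Create cell0 mappings
  ansible.builtin.command: >
    docker exec nova_api nova-manage cell_v2 map_cell0
    --database_connection mysql+pymysql://nova:{{ nova_database_password }}@api-int.testbed.osism.xyz/nova_cell0
  run_once: true

- name: Get a list of existing cells
  ansible.builtin.command: docker exec nova_api nova-manage cell_v2 list_cells --verbose
  register: existing_cells
  changed_when: false
  run_once: true
```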
2025-05-13 23:58:36.422420 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-05-13 23:58:36.422431 | orchestrator | Tuesday 13 May 2025 23:50:19 +0000 (0:00:01.090) 0:01:13.358 ***********
2025-05-13 23:58:36.422442 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:58:36.422453 | orchestrator |
2025-05-13 23:58:36.422463 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-13 23:58:36.422474 | orchestrator | Tuesday 13 May 2025 23:50:19 +0000 (0:00:00.493) 0:01:13.851 ***********
2025-05-13 23:58:36.422485 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:58:36.422496 | orchestrator |
2025-05-13 23:58:36.422507 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-13 23:58:36.422519 | orchestrator | Tuesday 13 May 2025 23:50:20 +0000 (0:00:00.555) 0:01:14.407 ***********
2025-05-13 23:58:36.422529 | orchestrator | ok: [testbed-node-0]
2025-05-13 23:58:36.422540 | orchestrator |
2025-05-13 23:58:36.422558 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-05-13 23:58:36.422569 | orchestrator | Tuesday 13 May 2025 23:50:37 +0000 (0:00:17.658) 0:01:32.066 ***********
2025-05-13 23:58:36.422590 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:58:36.422601 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:58:36.422612 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:58:36.422623 | orchestrator |
2025-05-13 23:58:36.422634 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-05-13 23:58:36.422645 | orchestrator |
2025-05-13 23:58:36.422656 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-13 23:58:36.422666 | orchestrator | Tuesday 13 May 2025 23:50:38 +0000 (0:00:00.306) 0:01:32.373 ***********
2025-05-13 23:58:36.422677 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 23:58:36.422688 | orchestrator |
2025-05-13 23:58:36.422698 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-05-13 23:58:36.422709 | orchestrator | Tuesday 13 May 2025 23:50:38 +0000 (0:00:00.613) 0:01:32.986 ***********
2025-05-13 23:58:36.422852 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:58:36.422864 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:58:36.422875 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:36.422885 | orchestrator |
2025-05-13 23:58:36.422896 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-05-13 23:58:36.422907 | orchestrator | Tuesday 13 May 2025 23:50:40 +0000 (0:00:01.911) 0:01:34.898 ***********
2025-05-13 23:58:36.422917 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:58:36.422928 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:58:36.422938 | orchestrator | changed: [testbed-node-0]
2025-05-13 23:58:36.422949 | orchestrator |
2025-05-13 23:58:36.422960 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-13 23:58:36.422970 | orchestrator | Tuesday 13 May 2025 23:50:42 +0000 (0:00:01.965) 0:01:36.863 ***********
2025-05-13 23:58:36.422981 | orchestrator | skipping: [testbed-node-0]
2025-05-13 23:58:36.422992 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:58:36.423002 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:58:36.423013 | orchestrator |
2025-05-13 23:58:36.423024 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-13 23:58:36.423034 | orchestrator | Tuesday 13 May 2025 23:50:43 +0000 (0:00:00.358) 0:01:37.222 ***********
2025-05-13 23:58:36.423044 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-13 23:58:36.423055 | orchestrator | skipping: [testbed-node-1]
2025-05-13 23:58:36.423066 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-13 23:58:36.423076 | orchestrator | skipping: [testbed-node-2]
2025-05-13 23:58:36.423087 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-13 23:58:36.423098 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-05-13 23:58:36.423109 | orchestrator |
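The RabbitMQ user task runs once and delegates to a broker host; the literal "{{ service_rabbitmq_delegate_host }}" printed above appears to be a cosmetic quirk of how Ansible labels the delegated result, not a templating failure. A sketch of the pattern with illustrative user, vhost and group names:

```yaml
# Sketch only: ensure the messaging user exists, delegated to the first
# RabbitMQ host and executed once for the whole group.
- name: nova | Ensure RabbitMQ users exist
  community.rabbitmq.rabbitmq_user:
    user: openstack
    password: "{{ rabbitmq_password }}"   # illustrative variable
    vhost: /
    configure_priv: ".*"
    read_priv: ".*"
    write_priv: ".*"
  delegate_to: "{{ groups['rabbitmq'] | first }}"
  run_once: true
```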
23:58:36.423172 | orchestrator | 2025-05-13 23:58:36.423184 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-13 23:58:36.423194 | orchestrator | Tuesday 13 May 2025 23:50:51 +0000 (0:00:00.300) 0:01:45.481 *********** 2025-05-13 23:58:36.423205 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-13 23:58:36.423215 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.423226 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-13 23:58:36.423237 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.423247 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-13 23:58:36.423258 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.423269 | orchestrator | 2025-05-13 23:58:36.423279 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-13 23:58:36.423290 | orchestrator | Tuesday 13 May 2025 23:50:52 +0000 (0:00:00.739) 0:01:46.221 *********** 2025-05-13 23:58:36.423308 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:58:36.423318 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.423329 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.423340 | orchestrator | 2025-05-13 23:58:36.423351 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-13 23:58:36.423361 | orchestrator | Tuesday 13 May 2025 23:50:52 +0000 (0:00:00.599) 0:01:46.821 *********** 2025-05-13 23:58:36.423372 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.423382 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.423444 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:58:36.423457 | orchestrator | 2025-05-13 23:58:36.423505 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-13 23:58:36.423516 | orchestrator | Tuesday 13 May 2025 23:50:53 +0000 (0:00:01.005) 0:01:47.826 *********** 2025-05-13 23:58:36.423527 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.423538 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.423567 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:58:36.423578 | orchestrator | 2025-05-13 23:58:36.423589 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-13 23:58:36.423600 | orchestrator | Tuesday 13 May 2025 23:50:55 +0000 (0:00:02.200) 0:01:50.027 *********** 2025-05-13 23:58:36.423611 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.423621 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.423632 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:58:36.423643 | orchestrator | 2025-05-13 23:58:36.423654 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-13 23:58:36.423664 | orchestrator | Tuesday 13 May 2025 23:51:15 +0000 (0:00:19.762) 0:02:09.790 *********** 2025-05-13 23:58:36.423675 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.423686 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.423697 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:58:36.423707 | orchestrator | 2025-05-13 23:58:36.423718 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-13 23:58:36.423729 | orchestrator | Tuesday 13 May 2025 23:51:25 +0000 (0:00:10.242) 0:02:20.032 *********** 2025-05-13 23:58:36.423746 | 
orchestrator | ok: [testbed-node-0] 2025-05-13 23:58:36.423757 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.423768 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.423778 | orchestrator | 2025-05-13 23:58:36.423789 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-13 23:58:36.423800 | orchestrator | Tuesday 13 May 2025 23:51:27 +0000 (0:00:01.406) 0:02:21.439 *********** 2025-05-13 23:58:36.423811 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.423821 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.423832 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:58:36.423843 | orchestrator | 2025-05-13 23:58:36.423853 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-13 23:58:36.423864 | orchestrator | Tuesday 13 May 2025 23:51:38 +0000 (0:00:10.958) 0:02:32.398 *********** 2025-05-13 23:58:36.423875 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.423885 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.423896 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.423906 | orchestrator | 2025-05-13 23:58:36.423917 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-13 23:58:36.423928 | orchestrator | Tuesday 13 May 2025 23:51:39 +0000 (0:00:01.578) 0:02:33.976 *********** 2025-05-13 23:58:36.423938 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.423949 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.423960 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.423970 | orchestrator | 2025-05-13 23:58:36.423981 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-13 23:58:36.423992 | orchestrator | 2025-05-13 23:58:36.424002 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-13 23:58:36.424021 | orchestrator | Tuesday 13 May 2025 23:51:40 +0000 (0:00:00.494) 0:02:34.471 *********** 2025-05-13 23:58:36.424032 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:58:36.424044 | orchestrator | 2025-05-13 23:58:36.424055 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-13 23:58:36.424066 | orchestrator | Tuesday 13 May 2025 23:51:40 +0000 (0:00:00.532) 0:02:35.003 *********** 2025-05-13 23:58:36.424076 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-13 23:58:36.424087 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-13 23:58:36.424098 | orchestrator | 2025-05-13 23:58:36.424108 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-13 23:58:36.424119 | orchestrator | Tuesday 13 May 2025 23:51:44 +0000 (0:00:03.340) 0:02:38.344 *********** 2025-05-13 23:58:36.424130 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-13 23:58:36.424143 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-13 23:58:36.424154 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-13 23:58:36.424166 | 
orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-13 23:58:36.424176 | orchestrator | 2025-05-13 23:58:36.424187 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-13 23:58:36.424198 | orchestrator | Tuesday 13 May 2025 23:51:50 +0000 (0:00:06.261) 0:02:44.605 *********** 2025-05-13 23:58:36.424208 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-13 23:58:36.424219 | orchestrator | 2025-05-13 23:58:36.424230 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-13 23:58:36.424241 | orchestrator | Tuesday 13 May 2025 23:51:53 +0000 (0:00:03.158) 0:02:47.764 *********** 2025-05-13 23:58:36.424252 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 23:58:36.424262 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-13 23:58:36.424273 | orchestrator | 2025-05-13 23:58:36.424283 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-13 23:58:36.424294 | orchestrator | Tuesday 13 May 2025 23:51:57 +0000 (0:00:03.818) 0:02:51.582 *********** 2025-05-13 23:58:36.424304 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 23:58:36.424315 | orchestrator | 2025-05-13 23:58:36.424326 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-13 23:58:36.424337 | orchestrator | Tuesday 13 May 2025 23:52:00 +0000 (0:00:03.281) 0:02:54.863 *********** 2025-05-13 23:58:36.424347 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-13 23:58:36.424358 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-13 23:58:36.424369 | orchestrator | 2025-05-13 23:58:36.424380 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-13 23:58:36.424418 | orchestrator | Tuesday 13 May 2025 23:52:08 +0000 (0:00:07.608) 0:03:02.471 *********** 2025-05-13 23:58:36.424442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.424468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.424482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.424503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.424521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.424540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.424552 | orchestrator | 2025-05-13 23:58:36.424563 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-13 23:58:36.424574 | orchestrator | Tuesday 13 May 2025 23:52:09 +0000 (0:00:01.586) 0:03:04.058 *********** 2025-05-13 23:58:36.424585 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.424596 | orchestrator | 2025-05-13 23:58:36.424606 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-13 23:58:36.424617 | orchestrator | Tuesday 13 May 2025 23:52:10 +0000 (0:00:00.362) 0:03:04.420 *********** 2025-05-13 23:58:36.424628 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.424639 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.424650 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.424660 | orchestrator | 2025-05-13 23:58:36.424671 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-13 23:58:36.424681 | orchestrator | Tuesday 13 May 2025 23:52:11 +0000 (0:00:01.205) 0:03:05.626 *********** 2025-05-13 23:58:36.424692 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 23:58:36.424703 | orchestrator | 2025-05-13 23:58:36.424713 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-13 23:58:36.424724 | orchestrator | Tuesday 13 May 2025 23:52:12 +0000 (0:00:00.722) 0:03:06.349 *********** 2025-05-13 23:58:36.424734 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.424745 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.424755 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.424766 | orchestrator | 2025-05-13 23:58:36.424777 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-13 23:58:36.424787 | orchestrator | Tuesday 13 May 2025 23:52:12 +0000 (0:00:00.339) 0:03:06.688 *********** 2025-05-13 23:58:36.424798 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:58:36.424809 | orchestrator | 2025-05-13 23:58:36.424819 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-13 23:58:36.424830 | orchestrator | Tuesday 13 May 2025 23:52:14 +0000 (0:00:02.044) 0:03:08.732 *********** 2025-05-13 23:58:36.424842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.424874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.424888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.424901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.424913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.424930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.424948 | orchestrator | 2025-05-13 23:58:36.424959 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-13 23:58:36.424970 | orchestrator | Tuesday 13 May 2025 23:52:17 +0000 (0:00:02.750) 0:03:11.483 *********** 2025-05-13 23:58:36.424994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:58:36.425006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.425018 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.425030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:58:36.425042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.425060 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.425084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-05-13 23:58:36.425098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.425109 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.425120 | orchestrator | 2025-05-13 23:58:36.425131 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-13 23:58:36.425142 | orchestrator | Tuesday 13 May 2025 23:52:18 +0000 (0:00:01.035) 0:03:12.518 *********** 2025-05-13 23:58:36.425153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:58:36.425166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.425184 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.425208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:58:36.425221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.425233 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.425245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:58:36.425256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.425273 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.425285 | orchestrator | 2025-05-13 23:58:36.425295 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-13 23:58:36.425306 | orchestrator | Tuesday 13 May 2025 23:52:19 
+0000 (0:00:01.425) 0:03:13.943 *********** 2025-05-13 23:58:36.425325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.425343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.425356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.425375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.425411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.425424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.425436 | orchestrator | 2025-05-13 23:58:36.425452 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-13 23:58:36.425463 | orchestrator | Tuesday 13 May 2025 23:52:22 +0000 (0:00:02.714) 0:03:16.658 *********** 2025-05-13 23:58:36.425475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.425487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.425513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.425530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.425542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.425554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.425566 | orchestrator | 2025-05-13 23:58:36.425577 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-13 23:58:36.425588 | orchestrator | Tuesday 13 May 2025 23:52:32 +0000 (0:00:10.013) 0:03:26.672 *********** 2025-05-13 23:58:36.425599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:58:36.425623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.425634 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.425651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:58:36.425663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.425675 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.425686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 23:58:36.425705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.425716 | orchestrator | skipping: [testbed-node-2] 
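The container definitions repeated in the items above all carry the same kolla-style healthcheck: an interval/retries/start_period/timeout envelope around either healthcheck_curl <url> (nova-api) or healthcheck_port <service> <port> (nova-scheduler). As a minimal Python sketch of what those two probes amount to, assuming only that healthcheck_curl treats any HTTP response as healthy and that healthcheck_port reduces to a TCP reachability check (the real kolla scripts inspect the named process's sockets, so this is an approximation; the URL, port, and timeout values are taken from the log entries above):

    import socket
    import urllib.error
    import urllib.request

    def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
        # Healthy as long as the endpoint answers at all, even with an HTTP error code.
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except urllib.error.HTTPError:
            return True
        except OSError:
            return False

    def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
        # Approximate the port probe with a plain TCP connect.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Values from the nova_api / nova_scheduler definitions logged above.
    print(healthcheck_curl("http://192.168.16.10:8774", timeout=30))
    print(healthcheck_port("192.168.16.10", 5672, timeout=30))

The interval and retries fields in those dicts map onto Docker's healthcheck semantics: the test runs every 30 seconds and the container is marked unhealthy after three consecutive failures.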
2025-05-13 23:58:36.425727 | orchestrator | 2025-05-13 23:58:36.425738 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-13 23:58:36.425749 | orchestrator | Tuesday 13 May 2025 23:52:34 +0000 (0:00:01.503) 0:03:28.176 *********** 2025-05-13 23:58:36.425760 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:58:36.425771 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:58:36.425781 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:58:36.425792 | orchestrator | 2025-05-13 23:58:36.425809 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-13 23:58:36.425820 | orchestrator | Tuesday 13 May 2025 23:52:36 +0000 (0:00:02.873) 0:03:31.049 *********** 2025-05-13 23:58:36.425831 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.425842 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.425853 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.425864 | orchestrator | 2025-05-13 23:58:36.425875 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-13 23:58:36.425885 | orchestrator | Tuesday 13 May 2025 23:52:37 +0000 (0:00:00.555) 0:03:31.605 *********** 2025-05-13 23:58:36.425901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.425914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.425944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 23:58:36.425957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.425974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.425986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.426005 | orchestrator | 2025-05-13 23:58:36.426048 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-13 
23:58:36.426063 | orchestrator | Tuesday 13 May 2025 23:52:39 +0000 (0:00:02.295) 0:03:33.900 *********** 2025-05-13 23:58:36.426074 | orchestrator | 2025-05-13 23:58:36.426084 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-13 23:58:36.426095 | orchestrator | Tuesday 13 May 2025 23:52:40 +0000 (0:00:00.391) 0:03:34.292 *********** 2025-05-13 23:58:36.426106 | orchestrator | 2025-05-13 23:58:36.426117 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-13 23:58:36.426127 | orchestrator | Tuesday 13 May 2025 23:52:40 +0000 (0:00:00.393) 0:03:34.685 *********** 2025-05-13 23:58:36.426138 | orchestrator | 2025-05-13 23:58:36.426148 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-13 23:58:36.426159 | orchestrator | Tuesday 13 May 2025 23:52:40 +0000 (0:00:00.377) 0:03:35.063 *********** 2025-05-13 23:58:36.426170 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:58:36.426181 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:58:36.426191 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:58:36.426202 | orchestrator | 2025-05-13 23:58:36.426213 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-13 23:58:36.426223 | orchestrator | Tuesday 13 May 2025 23:53:09 +0000 (0:00:28.106) 0:04:03.170 *********** 2025-05-13 23:58:36.426234 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:58:36.426244 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:58:36.426255 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:58:36.426265 | orchestrator | 2025-05-13 23:58:36.426276 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-13 23:58:36.426287 | orchestrator | 2025-05-13 23:58:36.426297 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-13 23:58:36.426308 | orchestrator | Tuesday 13 May 2025 23:53:14 +0000 (0:00:05.734) 0:04:08.905 *********** 2025-05-13 23:58:36.426319 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:58:36.426330 | orchestrator | 2025-05-13 23:58:36.426341 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-13 23:58:36.426352 | orchestrator | Tuesday 13 May 2025 23:53:16 +0000 (0:00:01.241) 0:04:10.146 *********** 2025-05-13 23:58:36.426362 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.426373 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.426384 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.426415 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.426427 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.426437 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.426448 | orchestrator | 2025-05-13 23:58:36.426459 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-13 23:58:36.426469 | orchestrator | Tuesday 13 May 2025 23:53:16 +0000 (0:00:00.716) 0:04:10.862 *********** 2025-05-13 23:58:36.426480 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.426490 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.426501 | orchestrator | skipping: [testbed-node-2] 2025-05-13 
23:58:36.426512 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:58:36.426522 | orchestrator | 2025-05-13 23:58:36.426533 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-13 23:58:36.426551 | orchestrator | Tuesday 13 May 2025 23:53:17 +0000 (0:00:00.981) 0:04:11.843 *********** 2025-05-13 23:58:36.426562 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-13 23:58:36.426573 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-13 23:58:36.426584 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-13 23:58:36.426602 | orchestrator | 2025-05-13 23:58:36.426613 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-13 23:58:36.426624 | orchestrator | Tuesday 13 May 2025 23:53:18 +0000 (0:00:00.648) 0:04:12.492 *********** 2025-05-13 23:58:36.426634 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-13 23:58:36.426645 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-13 23:58:36.426656 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-13 23:58:36.426667 | orchestrator | 2025-05-13 23:58:36.426677 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-13 23:58:36.426688 | orchestrator | Tuesday 13 May 2025 23:53:19 +0000 (0:00:01.189) 0:04:13.682 *********** 2025-05-13 23:58:36.426698 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-13 23:58:36.426714 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.426725 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-13 23:58:36.426736 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.426747 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-13 23:58:36.426757 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.426768 | orchestrator | 2025-05-13 23:58:36.426779 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-13 23:58:36.426789 | orchestrator | Tuesday 13 May 2025 23:53:20 +0000 (0:00:00.706) 0:04:14.388 *********** 2025-05-13 23:58:36.426800 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 23:58:36.426811 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 23:58:36.426821 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.426832 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 23:58:36.426843 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 23:58:36.426853 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.426864 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-13 23:58:36.426875 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 23:58:36.426885 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-13 23:58:36.426896 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 23:58:36.426906 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.426917 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 
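The module-load and sysctl tasks here prepare the compute nodes (testbed-node-3/4/5) for bridged instance traffic: br_netfilter is loaded and persisted via modules-load.d, and the bridge-nf-call sysctls are switched on so bridge traffic traverses iptables (the ip6tables items follow below). A minimal standalone sketch of what these steps amount to; the module name, sysctl keys, and host group come from the log, but the play structure is illustrative and does not reproduce the actual kolla-ansible role code:

    - name: Load and persist br_netfilter (illustrative sketch)
      hosts: compute
      become: true
      tasks:
        - name: Load br_netfilter immediately
          community.general.modprobe:
            name: br_netfilter
            state: present

        - name: Persist br_netfilter via modules-load.d
          ansible.builtin.copy:
            dest: /etc/modules-load.d/br_netfilter.conf
            content: "br_netfilter\n"
            mode: "0644"

        - name: Enable bridge-nf-call sysctls
          ansible.posix.sysctl:
            name: "{{ item }}"
            value: "1"
            state: present
            sysctl_set: true
          loop:
            - net.bridge.bridge-nf-call-iptables
            - net.bridge.bridge-nf-call-ip6tables

On a node where the module was already loaded the first task reports ok while the persistence and sysctl tasks report changed, which is exactly the pattern visible in the surrounding output.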
2025-05-13 23:58:36.426928 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-13 23:58:36.426938 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-13 23:58:36.426949 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-13 23:58:36.426959 | orchestrator | 2025-05-13 23:58:36.426970 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-13 23:58:36.426981 | orchestrator | Tuesday 13 May 2025 23:53:21 +0000 (0:00:01.104) 0:04:15.492 *********** 2025-05-13 23:58:36.426991 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.427005 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.427025 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.427044 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:58:36.427062 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:58:36.427079 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:58:36.427096 | orchestrator | 2025-05-13 23:58:36.427114 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-13 23:58:36.427130 | orchestrator | Tuesday 13 May 2025 23:53:22 +0000 (0:00:01.589) 0:04:17.082 *********** 2025-05-13 23:58:36.427146 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.427164 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.427193 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.427211 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:58:36.427230 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:58:36.427247 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:58:36.427266 | orchestrator | 2025-05-13 23:58:36.427278 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-13 23:58:36.427289 | orchestrator | Tuesday 13 May 2025 23:53:24 +0000 (0:00:01.594) 0:04:18.677 *********** 2025-05-13 23:58:36.427300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427329 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427356 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427827 | orchestrator | 2025-05-13 23:58:36.427837 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-13 23:58:36.427846 | orchestrator | Tuesday 13 May 2025 23:53:27 +0000 (0:00:02.759) 0:04:21.437 *********** 2025-05-13 23:58:36.427857 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 23:58:36.427867 | orchestrator | 2025-05-13 23:58:36.427877 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-13 23:58:36.427887 | orchestrator | Tuesday 13 May 2025 23:53:28 +0000 (0:00:01.244) 0:04:22.682 *********** 2025-05-13 23:58:36.427897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.427992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.428009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.428021 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.428031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.428066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.428078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.428094 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.428104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.428125 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.428136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.428146 | orchestrator | 2025-05-13 23:58:36.428156 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-13 23:58:36.428165 | orchestrator | Tuesday 13 May 2025 23:53:32 +0000 (0:00:03.987) 0:04:26.669 *********** 2025-05-13 23:58:36.428202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 
'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.428220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.428230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428246 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.428257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.428268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.428329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428342 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.428360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.428372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.428389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 23:58:36.428423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428434 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.428445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428456 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.428500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 23:58:36.428518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428530 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.428542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 23:58:36.428560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428571 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.428582 | orchestrator | 2025-05-13 23:58:36.428592 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-13 23:58:36.428603 | orchestrator | Tuesday 13 May 2025 23:53:35 +0000 (0:00:02.601) 0:04:29.271 *********** 2025-05-13 23:58:36.428614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.428626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.428667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428680 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.428697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.428716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.428727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428737 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.428747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 23:58:36.428757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428767 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.428804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.428827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.428838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428848 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.428857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 23:58:36.428868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428878 | orchestrator | skipping: [testbed-node-0] 2025-05-13 
23:58:36.428888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 23:58:36.428926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.428944 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.428954 | orchestrator | 2025-05-13 23:58:36.428963 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-13 23:58:36.428973 | orchestrator | Tuesday 13 May 2025 23:53:37 +0000 (0:00:02.433) 0:04:31.704 *********** 2025-05-13 23:58:36.428982 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.428992 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.429006 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.429021 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 23:58:36.429040 | orchestrator | 2025-05-13 23:58:36.429058 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-13 23:58:36.429075 | orchestrator | Tuesday 13 May 2025 23:53:38 +0000 (0:00:01.278) 0:04:32.982 *********** 2025-05-13 23:58:36.429093 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 23:58:36.429110 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-13 23:58:36.429126 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-13 23:58:36.429143 | orchestrator | 2025-05-13 23:58:36.429160 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-13 23:58:36.429178 | orchestrator | Tuesday 13 May 2025 23:53:41 +0000 (0:00:02.351) 0:04:35.334 *********** 2025-05-13 23:58:36.429194 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 23:58:36.429212 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-13 23:58:36.429223 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-13 23:58:36.429232 | orchestrator | 2025-05-13 23:58:36.429242 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-13 23:58:36.429251 | orchestrator | Tuesday 13 May 2025 23:53:42 +0000 (0:00:01.511) 0:04:36.846 *********** 2025-05-13 23:58:36.429261 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:58:36.429270 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:58:36.429280 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:58:36.429289 | orchestrator | 2025-05-13 23:58:36.429298 
| orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-13 23:58:36.429308 | orchestrator | Tuesday 13 May 2025 23:53:43 +0000 (0:00:00.633) 0:04:37.480 *********** 2025-05-13 23:58:36.429317 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:58:36.429326 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:58:36.429335 | orchestrator | ok: [testbed-node-5] 2025-05-13 23:58:36.429345 | orchestrator | 2025-05-13 23:58:36.429354 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-13 23:58:36.429363 | orchestrator | Tuesday 13 May 2025 23:53:44 +0000 (0:00:00.716) 0:04:38.197 *********** 2025-05-13 23:58:36.429373 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-13 23:58:36.429383 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-13 23:58:36.429392 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-13 23:58:36.429423 | orchestrator | 2025-05-13 23:58:36.429433 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-13 23:58:36.429442 | orchestrator | Tuesday 13 May 2025 23:53:45 +0000 (0:00:01.385) 0:04:39.582 *********** 2025-05-13 23:58:36.429451 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-13 23:58:36.429461 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-13 23:58:36.429470 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-13 23:58:36.429479 | orchestrator | 2025-05-13 23:58:36.429489 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-13 23:58:36.429498 | orchestrator | Tuesday 13 May 2025 23:53:46 +0000 (0:00:01.239) 0:04:40.822 *********** 2025-05-13 23:58:36.429507 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-13 23:58:36.429525 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-13 23:58:36.429534 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-13 23:58:36.429544 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-13 23:58:36.429553 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-13 23:58:36.429563 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-13 23:58:36.429572 | orchestrator | 2025-05-13 23:58:36.429581 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-13 23:58:36.429591 | orchestrator | Tuesday 13 May 2025 23:53:50 +0000 (0:00:04.179) 0:04:45.001 *********** 2025-05-13 23:58:36.429600 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.429609 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.429619 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.429628 | orchestrator | 2025-05-13 23:58:36.429638 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-13 23:58:36.429647 | orchestrator | Tuesday 13 May 2025 23:53:51 +0000 (0:00:00.298) 0:04:45.300 *********** 2025-05-13 23:58:36.429657 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.429666 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.429675 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.429685 | orchestrator | 2025-05-13 23:58:36.429694 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 
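Before the libvirt secrets are created in the tasks below, the external_ceph steps above have staged the Ceph client configuration into each service's /etc/kolla/<service>/ directory on the compute nodes; those directories are the ones the container definitions earlier bind-mount read-only at /var/lib/kolla/config_files/. A rough sketch of the copy pattern, assuming local source files named files/ceph.conf and files/ceph.client.<name>.keyring (the source paths and play structure are illustrative; destinations mirror the log):

    - name: Stage external Ceph config for nova (illustrative sketch)
      hosts: compute
      become: true
      tasks:
        - name: Copy ceph.conf into the service config directories
          ansible.builtin.copy:
            src: files/ceph.conf                          # assumed local source path
            dest: "/etc/kolla/{{ item }}/ceph.conf"
            mode: "0660"
          loop:
            - nova-compute
            - nova-libvirt

        - name: Copy the nova and cinder client keyrings for nova-compute
          ansible.builtin.copy:
            src: "files/ceph.client.{{ item }}.keyring"   # assumed local source path
            dest: "/etc/kolla/nova-compute/ceph.client.{{ item }}.keyring"
            mode: "0600"
          loop:
            - nova
            - cinder

The keys extracted from these keyrings are what the following tasks wrap in libvirt secret XML and push to each hypervisor, so that nova-libvirt can attach RBD-backed volumes.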
2025-05-13 23:58:36.429704 | orchestrator | Tuesday 13 May 2025 23:53:51 +0000 (0:00:00.415) 0:04:45.716 *********** 2025-05-13 23:58:36.429713 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:58:36.429723 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:58:36.429733 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:58:36.429742 | orchestrator | 2025-05-13 23:58:36.429792 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-13 23:58:36.429804 | orchestrator | Tuesday 13 May 2025 23:53:53 +0000 (0:00:01.448) 0:04:47.164 *********** 2025-05-13 23:58:36.429814 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-13 23:58:36.429824 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-13 23:58:36.429834 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-13 23:58:36.429844 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-13 23:58:36.429859 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-13 23:58:36.429869 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-13 23:58:36.429879 | orchestrator | 2025-05-13 23:58:36.429889 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-13 23:58:36.429898 | orchestrator | Tuesday 13 May 2025 23:53:56 +0000 (0:00:03.796) 0:04:50.960 *********** 2025-05-13 23:58:36.429908 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 23:58:36.429917 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 23:58:36.429927 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 23:58:36.429936 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 23:58:36.429946 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:58:36.429956 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 23:58:36.429965 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:58:36.429975 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 23:58:36.429984 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:58:36.429994 | orchestrator | 2025-05-13 23:58:36.430003 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-13 23:58:36.430055 | orchestrator | Tuesday 13 May 2025 23:54:00 +0000 (0:00:03.700) 0:04:54.661 *********** 2025-05-13 23:58:36.430068 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.430077 | orchestrator | 2025-05-13 23:58:36.430087 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-13 23:58:36.430096 | orchestrator | Tuesday 13 May 2025 23:54:00 +0000 (0:00:00.122) 0:04:54.783 *********** 2025-05-13 23:58:36.430106 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.430116 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.430125 | orchestrator | skipping: [testbed-node-5] 2025-05-13 
23:58:36.430134 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.430144 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.430153 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.430163 | orchestrator | 2025-05-13 23:58:36.430172 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-13 23:58:36.430182 | orchestrator | Tuesday 13 May 2025 23:54:01 +0000 (0:00:00.874) 0:04:55.658 *********** 2025-05-13 23:58:36.430191 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 23:58:36.430201 | orchestrator | 2025-05-13 23:58:36.430211 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-13 23:58:36.430220 | orchestrator | Tuesday 13 May 2025 23:54:02 +0000 (0:00:00.720) 0:04:56.378 *********** 2025-05-13 23:58:36.430229 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.430239 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.430248 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.430258 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.430267 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.430276 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.430286 | orchestrator | 2025-05-13 23:58:36.430295 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-13 23:58:36.430305 | orchestrator | Tuesday 13 May 2025 23:54:02 +0000 (0:00:00.611) 0:04:56.990 *********** 2025-05-13 23:58:36.430316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430332 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430365 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
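Every item in "Copying over config.json files for services" renders kolla's per-container bootstrap manifest. Note the first volume in each dump, '/etc/kolla/<service>/:/var/lib/kolla/config_files/:ro': the image's kolla_start entrypoint reads config.json from that mount, copies the listed files into place with the stated owner and mode, then execs the service. The rendered file has roughly this shape (a sketch with illustrative values, not the actual testbed output):

    - name: Copy config.json for nova-conductor (sketch)
      ansible.builtin.copy:
        dest: /etc/kolla/nova-conductor/config.json   # host side of the config_files mount
        mode: "0660"
        content: |
          {
            "command": "nova-conductor",
            "config_files": [
              {
                "source": "/var/lib/kolla/config_files/nova.conf",
                "dest": "/etc/nova/nova.conf",
                "owner": "nova",
                "perm": "0600"
              }
            ]
          }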
2025-05-13 23:58:36.430566 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430597 | orchestrator | 2025-05-13 23:58:36.430607 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-13 23:58:36.430617 | orchestrator | Tuesday 13 May 2025 23:54:08 +0000 (0:00:05.287) 0:05:02.278 *********** 2025-05-13 23:58:36.430627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.430637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.430647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.430658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.430674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.430695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.430706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 
'timeout': '30'}}}) 2025-05-13 23:58:36.430716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430726 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.430813 | orchestrator | 2025-05-13 23:58:36.430823 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-13 23:58:36.430832 | orchestrator | Tuesday 13 May 2025 23:54:18 +0000 (0:00:10.205) 0:05:12.483 *********** 2025-05-13 23:58:36.430842 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.430852 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.430861 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.430871 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.430880 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.430889 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.430899 | orchestrator | 2025-05-13 23:58:36.430908 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-13 23:58:36.430918 | orchestrator | Tuesday 13 May 2025 23:54:21 +0000 (0:00:02.718) 0:05:15.202 *********** 2025-05-13 23:58:36.430933 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-13 23:58:36.430942 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-13 23:58:36.430952 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-13 23:58:36.430962 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 
'qemu.conf'}) 2025-05-13 23:58:36.430976 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-13 23:58:36.430987 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-13 23:58:36.431005 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-13 23:58:36.431021 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.431037 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-13 23:58:36.431052 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.431067 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-13 23:58:36.431083 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.431099 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-13 23:58:36.431122 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-13 23:58:36.431140 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-13 23:58:36.431159 | orchestrator | 2025-05-13 23:58:36.431176 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-13 23:58:36.431194 | orchestrator | Tuesday 13 May 2025 23:54:25 +0000 (0:00:04.498) 0:05:19.701 *********** 2025-05-13 23:58:36.431212 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.431228 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.431239 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.431248 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.431258 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.431267 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.431282 | orchestrator | 2025-05-13 23:58:36.431299 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-13 23:58:36.431314 | orchestrator | Tuesday 13 May 2025 23:54:26 +0000 (0:00:01.046) 0:05:20.748 *********** 2025-05-13 23:58:36.431330 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-13 23:58:36.431345 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-13 23:58:36.431359 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-13 23:58:36.431372 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-13 23:58:36.431386 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-13 23:58:36.431468 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-13 23:58:36.431485 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-13 23:58:36.431500 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-13 23:58:36.431516 | orchestrator | 
skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-13 23:58:36.431532 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.431563 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-13 23:58:36.431579 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-13 23:58:36.431596 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.431613 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-13 23:58:36.431629 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-13 23:58:36.431648 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-13 23:58:36.431658 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-13 23:58:36.431668 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.431677 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-13 23:58:36.431686 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-13 23:58:36.431696 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-13 23:58:36.431705 | orchestrator | 2025-05-13 23:58:36.431715 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-13 23:58:36.431725 | orchestrator | Tuesday 13 May 2025 23:54:33 +0000 (0:00:07.272) 0:05:28.020 *********** 2025-05-13 23:58:36.431734 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-13 23:58:36.431744 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-13 23:58:36.431763 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-13 23:58:36.431773 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-13 23:58:36.431782 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 23:58:36.431792 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 23:58:36.431801 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-13 23:58:36.431811 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-13 23:58:36.431820 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 23:58:36.431829 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-13 23:58:36.431845 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-13 23:58:36.431855 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-13 23:58:36.431864 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-13 23:58:36.431874 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.431883 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-13 23:58:36.431892 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.431902 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-13 23:58:36.431912 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.431921 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 23:58:36.431931 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 23:58:36.431940 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 23:58:36.431960 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 23:58:36.431970 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 23:58:36.431978 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 23:58:36.431986 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 23:58:36.431994 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 23:58:36.432001 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 23:58:36.432009 | orchestrator | 2025-05-13 23:58:36.432016 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-13 23:58:36.432024 | orchestrator | Tuesday 13 May 2025 23:54:42 +0000 (0:00:08.181) 0:05:36.202 *********** 2025-05-13 23:58:36.432032 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.432040 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.432048 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.432055 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.432063 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.432070 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.432078 | orchestrator | 2025-05-13 23:58:36.432086 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-13 23:58:36.432093 | orchestrator | Tuesday 13 May 2025 23:54:42 +0000 (0:00:00.589) 0:05:36.792 *********** 2025-05-13 23:58:36.432101 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.432109 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.432117 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.432124 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.432132 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.432139 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.432147 | orchestrator | 2025-05-13 23:58:36.432155 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-13 23:58:36.432163 | orchestrator | Tuesday 13 May 2025 23:54:43 +0000 (0:00:00.858) 0:05:37.650 *********** 2025-05-13 23:58:36.432171 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.432178 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:58:36.432186 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 23:58:36.432193 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:58:36.432201 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:58:36.432208 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.432216 | orchestrator | 2025-05-13 23:58:36.432224 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-13 23:58:36.432231 | orchestrator | Tuesday 13 May 2025 23:54:45 +0000 (0:00:01.897) 0:05:39.548 *********** 2025-05-13 23:58:36.432247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.432260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.432274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.432283 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.432292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.432300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.432309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.432317 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.432331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 23:58:36.432349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 23:58:36.432358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.432366 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.432375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 23:58:36.432383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.432391 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.432419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 23:58:36.432433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.432447 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.432459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 23:58:36.432468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 23:58:36.432476 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.432484 | orchestrator | 2025-05-13 23:58:36.432492 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-13 23:58:36.432500 | orchestrator | Tuesday 13 May 2025 23:54:47 +0000 (0:00:01.672) 0:05:41.221 *********** 2025-05-13 23:58:36.432508 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-13 23:58:36.432516 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-13 23:58:36.432523 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.432531 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-13 23:58:36.432539 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-13 23:58:36.432547 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.432554 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-13 23:58:36.432562 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-13 23:58:36.432570 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.432577 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-13 23:58:36.432585 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-13 23:58:36.432592 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.432600 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-13 23:58:36.432608 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-13 23:58:36.432615 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.432623 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-13 23:58:36.432631 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-13 23:58:36.432638 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.432646 | orchestrator | 2025-05-13 23:58:36.432654 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-13 23:58:36.432662 | orchestrator | Tuesday 13 May 2025 23:54:47 +0000 (0:00:00.679) 0:05:41.900 *********** 2025-05-13 23:58:36.432670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432718 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432726 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 23:58:36.432875 | orchestrator | 2025-05-13 23:58:36.432883 | orchestrator | 
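The loop results above register one Docker container per service; each item's 'healthcheck' dict (interval, retries, start_period, test, timeout) maps onto Docker's native container healthcheck, so the same probes can be run by hand. A minimal sketch, assuming shell access to a compute node and the container names shown in the log:

# Health status as Docker records it for the kolla-defined healthcheck
docker inspect --format '{{.State.Health.Status}}' nova_libvirt

# The same probe the nova_libvirt healthcheck definition runs per the log
docker exec nova_libvirt virsh version --daemon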
TASK [nova-cell : include_tasks] *********************************************** 2025-05-13 23:58:36.432891 | orchestrator | Tuesday 13 May 2025 23:54:51 +0000 (0:00:03.880) 0:05:45.781 *********** 2025-05-13 23:58:36.432899 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.432906 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.432914 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.432922 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.432929 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.432937 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.432945 | orchestrator | 2025-05-13 23:58:36.432952 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 23:58:36.432960 | orchestrator | Tuesday 13 May 2025 23:54:52 +0000 (0:00:00.522) 0:05:46.304 *********** 2025-05-13 23:58:36.432968 | orchestrator | 2025-05-13 23:58:36.432976 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 23:58:36.432984 | orchestrator | Tuesday 13 May 2025 23:54:52 +0000 (0:00:00.275) 0:05:46.580 *********** 2025-05-13 23:58:36.432991 | orchestrator | 2025-05-13 23:58:36.432999 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 23:58:36.433007 | orchestrator | Tuesday 13 May 2025 23:54:52 +0000 (0:00:00.146) 0:05:46.726 *********** 2025-05-13 23:58:36.433015 | orchestrator | 2025-05-13 23:58:36.433022 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 23:58:36.433030 | orchestrator | Tuesday 13 May 2025 23:54:52 +0000 (0:00:00.139) 0:05:46.866 *********** 2025-05-13 23:58:36.433038 | orchestrator | 2025-05-13 23:58:36.433045 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 23:58:36.433053 | orchestrator | Tuesday 13 May 2025 23:54:52 +0000 (0:00:00.133) 0:05:47.000 *********** 2025-05-13 23:58:36.433065 | orchestrator | 2025-05-13 23:58:36.433073 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 23:58:36.433081 | orchestrator | Tuesday 13 May 2025 23:54:52 +0000 (0:00:00.128) 0:05:47.128 *********** 2025-05-13 23:58:36.433089 | orchestrator | 2025-05-13 23:58:36.433096 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-13 23:58:36.433104 | orchestrator | Tuesday 13 May 2025 23:54:53 +0000 (0:00:00.149) 0:05:47.277 *********** 2025-05-13 23:58:36.433112 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:58:36.433119 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:58:36.433127 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:58:36.433135 | orchestrator | 2025-05-13 23:58:36.433143 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-13 23:58:36.433150 | orchestrator | Tuesday 13 May 2025 23:55:05 +0000 (0:00:12.337) 0:05:59.615 *********** 2025-05-13 23:58:36.433158 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:58:36.433166 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:58:36.433173 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:58:36.433181 | orchestrator | 2025-05-13 23:58:36.433189 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-13 23:58:36.433196 | orchestrator | Tuesday 
13 May 2025 23:55:20 +0000 (0:00:14.659) 0:06:14.274 *********** 2025-05-13 23:58:36.433204 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:58:36.433212 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:58:36.433220 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:58:36.433227 | orchestrator | 2025-05-13 23:58:36.433235 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-13 23:58:36.433243 | orchestrator | Tuesday 13 May 2025 23:55:42 +0000 (0:00:22.527) 0:06:36.801 *********** 2025-05-13 23:58:36.433251 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:58:36.433258 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:58:36.433266 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:58:36.433274 | orchestrator | 2025-05-13 23:58:36.433281 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-13 23:58:36.433289 | orchestrator | Tuesday 13 May 2025 23:56:54 +0000 (0:01:11.401) 0:07:48.203 *********** 2025-05-13 23:58:36.433297 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:58:36.433305 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:58:36.433312 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:58:36.433320 | orchestrator | 2025-05-13 23:58:36.433328 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-13 23:58:36.433335 | orchestrator | Tuesday 13 May 2025 23:56:55 +0000 (0:00:01.096) 0:07:49.300 *********** 2025-05-13 23:58:36.433343 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:58:36.433351 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:58:36.433359 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:58:36.433367 | orchestrator | 2025-05-13 23:58:36.433375 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-13 23:58:36.433387 | orchestrator | Tuesday 13 May 2025 23:56:55 +0000 (0:00:00.832) 0:07:50.133 *********** 2025-05-13 23:58:36.433410 | orchestrator | changed: [testbed-node-4] 2025-05-13 23:58:36.433418 | orchestrator | changed: [testbed-node-5] 2025-05-13 23:58:36.433426 | orchestrator | changed: [testbed-node-3] 2025-05-13 23:58:36.433433 | orchestrator | 2025-05-13 23:58:36.433441 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-13 23:58:36.433449 | orchestrator | Tuesday 13 May 2025 23:57:27 +0000 (0:00:31.599) 0:08:21.732 *********** 2025-05-13 23:58:36.433456 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.433464 | orchestrator | 2025-05-13 23:58:36.433472 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-13 23:58:36.433479 | orchestrator | Tuesday 13 May 2025 23:57:27 +0000 (0:00:00.127) 0:08:21.860 *********** 2025-05-13 23:58:36.433487 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.433495 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.433507 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.433515 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.433522 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.433537 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
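The retry above polls until every nova-compute service has registered itself in the service catalog; only then are the new computes mapped into the cell (the discover_computes.yml include that follows). A hand-run equivalent, sketched on the assumption that admin credentials are loaded and, as is usual for kolla-ansible deployments, that nova-manage is executed inside the nova_conductor container:

# List registered nova-compute services; the task retries until all hosts appear
openstack compute service list --service nova-compute

# Map newly registered computes into the cell (what the discovery tasks wrap)
docker exec nova_conductor nova-manage cell_v2 discover_hosts --verbose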
2025-05-13 23:58:36.433545 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-13 23:58:36.433553 | orchestrator | 2025-05-13 23:58:36.433560 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-13 23:58:36.433568 | orchestrator | Tuesday 13 May 2025 23:57:49 +0000 (0:00:21.820) 0:08:43.681 *********** 2025-05-13 23:58:36.433576 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.433583 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.433591 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.433599 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.433606 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.433614 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.433622 | orchestrator | 2025-05-13 23:58:36.433629 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-13 23:58:36.433637 | orchestrator | Tuesday 13 May 2025 23:57:57 +0000 (0:00:08.070) 0:08:51.751 *********** 2025-05-13 23:58:36.433645 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.433652 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.433660 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.433668 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.433675 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.433683 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-05-13 23:58:36.433691 | orchestrator | 2025-05-13 23:58:36.433699 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-13 23:58:36.433706 | orchestrator | Tuesday 13 May 2025 23:58:01 +0000 (0:00:04.348) 0:08:56.100 *********** 2025-05-13 23:58:36.433714 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-13 23:58:36.433722 | orchestrator | 2025-05-13 23:58:36.433729 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-13 23:58:36.433737 | orchestrator | Tuesday 13 May 2025 23:58:13 +0000 (0:00:11.682) 0:09:07.782 *********** 2025-05-13 23:58:36.433745 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-13 23:58:36.433752 | orchestrator | 2025-05-13 23:58:36.433760 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-13 23:58:36.433768 | orchestrator | Tuesday 13 May 2025 23:58:14 +0000 (0:00:01.340) 0:09:09.123 *********** 2025-05-13 23:58:36.433775 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.433783 | orchestrator | 2025-05-13 23:58:36.433791 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-13 23:58:36.433798 | orchestrator | Tuesday 13 May 2025 23:58:16 +0000 (0:00:01.336) 0:09:10.460 *********** 2025-05-13 23:58:36.433806 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-13 23:58:36.433814 | orchestrator | 2025-05-13 23:58:36.433821 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-13 23:58:36.433829 | orchestrator | Tuesday 13 May 2025 23:58:26 +0000 (0:00:10.150) 0:09:20.611 *********** 2025-05-13 23:58:36.433836 | orchestrator | ok: [testbed-node-3] 2025-05-13 23:58:36.433844 | orchestrator | ok: [testbed-node-4] 2025-05-13 23:58:36.433852 | orchestrator | ok: 
[testbed-node-5] 2025-05-13 23:58:36.433859 | orchestrator | ok: [testbed-node-0] 2025-05-13 23:58:36.433867 | orchestrator | ok: [testbed-node-1] 2025-05-13 23:58:36.433875 | orchestrator | ok: [testbed-node-2] 2025-05-13 23:58:36.433882 | orchestrator | 2025-05-13 23:58:36.433890 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-13 23:58:36.433897 | orchestrator | 2025-05-13 23:58:36.433905 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-13 23:58:36.433918 | orchestrator | Tuesday 13 May 2025 23:58:28 +0000 (0:00:01.635) 0:09:22.247 *********** 2025-05-13 23:58:36.433926 | orchestrator | changed: [testbed-node-0] 2025-05-13 23:58:36.433934 | orchestrator | changed: [testbed-node-1] 2025-05-13 23:58:36.433941 | orchestrator | changed: [testbed-node-2] 2025-05-13 23:58:36.433949 | orchestrator | 2025-05-13 23:58:36.433957 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-13 23:58:36.433964 | orchestrator | 2025-05-13 23:58:36.433972 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-13 23:58:36.433979 | orchestrator | Tuesday 13 May 2025 23:58:29 +0000 (0:00:01.173) 0:09:23.420 *********** 2025-05-13 23:58:36.433987 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.433995 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.434002 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.434010 | orchestrator | 2025-05-13 23:58:36.434043 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-13 23:58:36.434052 | orchestrator | 2025-05-13 23:58:36.434059 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-13 23:58:36.434067 | orchestrator | Tuesday 13 May 2025 23:58:29 +0000 (0:00:00.534) 0:09:23.955 *********** 2025-05-13 23:58:36.434075 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-13 23:58:36.434087 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-13 23:58:36.434095 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-13 23:58:36.434103 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-13 23:58:36.434111 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-13 23:58:36.434118 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-13 23:58:36.434126 | orchestrator | skipping: [testbed-node-3] 2025-05-13 23:58:36.434134 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-13 23:58:36.434141 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-13 23:58:36.434149 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-13 23:58:36.434157 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-13 23:58:36.434164 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-13 23:58:36.434172 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-13 23:58:36.434179 | orchestrator | skipping: [testbed-node-4] 2025-05-13 23:58:36.434191 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-13 23:58:36.434199 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-13 23:58:36.434207 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-13 23:58:36.434214 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-13 23:58:36.434222 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-13 23:58:36.434230 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-13 23:58:36.434237 | orchestrator | skipping: [testbed-node-5] 2025-05-13 23:58:36.434245 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-13 23:58:36.434253 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-13 23:58:36.434260 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-13 23:58:36.434268 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-13 23:58:36.434276 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-13 23:58:36.434283 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-13 23:58:36.434291 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.434298 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-13 23:58:36.434306 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-13 23:58:36.434314 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-13 23:58:36.434327 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-13 23:58:36.434335 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-13 23:58:36.434343 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-13 23:58:36.434350 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.434358 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-13 23:58:36.434366 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-13 23:58:36.434373 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-13 23:58:36.434381 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-13 23:58:36.434389 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-13 23:58:36.434439 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-13 23:58:36.434448 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.434455 | orchestrator | 2025-05-13 23:58:36.434463 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-13 23:58:36.434471 | orchestrator | 2025-05-13 23:58:36.434479 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-13 23:58:36.434487 | orchestrator | Tuesday 13 May 2025 23:58:31 +0000 (0:00:01.298) 0:09:25.253 *********** 2025-05-13 23:58:36.434494 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-13 23:58:36.434502 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-13 23:58:36.434510 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.434518 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-13 23:58:36.434525 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-13 23:58:36.434533 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.434541 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-13 23:58:36.434548 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-05-13 23:58:36.434556 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.434564 | orchestrator | 2025-05-13 23:58:36.434571 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-13 23:58:36.434579 | orchestrator | 2025-05-13 23:58:36.434587 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-13 23:58:36.434594 | orchestrator | Tuesday 13 May 2025 23:58:31 +0000 (0:00:00.732) 0:09:25.986 *********** 2025-05-13 23:58:36.434602 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.434610 | orchestrator | 2025-05-13 23:58:36.434618 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-13 23:58:36.434625 | orchestrator | 2025-05-13 23:58:36.434633 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-13 23:58:36.434641 | orchestrator | Tuesday 13 May 2025 23:58:32 +0000 (0:00:00.680) 0:09:26.667 *********** 2025-05-13 23:58:36.434648 | orchestrator | skipping: [testbed-node-0] 2025-05-13 23:58:36.434656 | orchestrator | skipping: [testbed-node-1] 2025-05-13 23:58:36.434664 | orchestrator | skipping: [testbed-node-2] 2025-05-13 23:58:36.434671 | orchestrator | 2025-05-13 23:58:36.434679 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 23:58:36.434687 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 23:58:36.434700 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-13 23:58:36.434708 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-13 23:58:36.434716 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-13 23:58:36.434730 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-13 23:58:36.434742 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-13 23:58:36.434750 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-05-13 23:58:36.434757 | orchestrator | 2025-05-13 23:58:36.434765 | orchestrator | 2025-05-13 23:58:36.434773 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 23:58:36.434781 | orchestrator | Tuesday 13 May 2025 23:58:32 +0000 (0:00:00.459) 0:09:27.126 *********** 2025-05-13 23:58:36.434788 | orchestrator | =============================================================================== 2025-05-13 23:58:36.434796 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 71.40s 2025-05-13 23:58:36.434804 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.60s 2025-05-13 23:58:36.434812 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.10s 2025-05-13 23:58:36.434819 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 28.11s 2025-05-13 23:58:36.434827 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.53s 2025-05-13 23:58:36.434834 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 21.82s 2025-05-13 23:58:36.434842 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.76s 2025-05-13 23:58:36.434850 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.66s 2025-05-13 23:58:36.434857 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.66s 2025-05-13 23:58:36.434865 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.46s 2025-05-13 23:58:36.434873 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.34s 2025-05-13 23:58:36.434880 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.97s 2025-05-13 23:58:36.434888 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.68s 2025-05-13 23:58:36.434896 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.96s 2025-05-13 23:58:36.434903 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.24s 2025-05-13 23:58:36.434911 | orchestrator | nova-cell : Copying over nova.conf ------------------------------------- 10.21s 2025-05-13 23:58:36.434919 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.15s 2025-05-13 23:58:36.434926 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.01s 2025-05-13 23:58:36.434934 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.18s 2025-05-13 23:58:36.434941 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.07s 2025-05-13 23:58:36.434949 | orchestrator | 2025-05-13 23:58:36 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:58:39.473106 | orchestrator | 2025-05-13 23:58:39 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 23:58:39.473206 | orchestrator | 2025-05-13 23:58:39 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:58:42.530083 | orchestrator | 2025-05-13 23:58:42 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 23:58:42.530200 | orchestrator | 2025-05-13 23:58:42 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:58:45.582118 | orchestrator | 2025-05-13 23:58:45 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 23:58:45.582214 | orchestrator | 2025-05-13 23:58:45 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:58:48.628515 | orchestrator | 2025-05-13 23:58:48 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 23:58:48.628625 | orchestrator | 2025-05-13 23:58:48 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:58:51.671297 | orchestrator | 2025-05-13 23:58:51 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 23:58:51.671449 | orchestrator | 2025-05-13 23:58:51 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:58:54.720170 | orchestrator | 2025-05-13 23:58:54 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 23:58:54.720263 | orchestrator | 2025-05-13 23:58:54 | INFO  | Wait 1 second(s) until the next check 2025-05-13 23:58:57.758956 | orchestrator | 2025-05-13 23:58:57 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-13 
23:58:57.759056 | orchestrator | 2025-05-13 23:58:57 | INFO  | Wait 1 second(s) until the next check
[... Task f6e81fa1-d417-4874-bad6-e772623aa49e remained in state STARTED; identical check/wait entries repeated every ~3 seconds from 2025-05-13 23:59:00 through 2025-05-14 00:01:08 condensed ...]
2025-05-14 00:01:08.895912 | orchestrator | 2025-05-14 00:01:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 00:01:11.948492 | orchestrator | 2025-05-14 00:01:11 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-14 00:01:11.948592 | orchestrator | 2025-05-14 00:01:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 00:01:15.000884 | orchestrator | 2025-05-14 00:01:14 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state STARTED 2025-05-14 00:01:15.000944 | orchestrator | 2025-05-14 00:01:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 00:01:18.061042 | orchestrator | 2025-05-14 00:01:18.061197 | orchestrator | 2025-05-14 00:01:18.061422 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 00:01:18.061436 | orchestrator | 2025-05-14 00:01:18.061447 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 00:01:18.061459 | orchestrator | Tuesday 13 May 2025 23:56:27 +0000 (0:00:00.272) 0:00:00.272 *********** 2025-05-14 00:01:18.061472 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:01:18.061488 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:01:18.061501 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:01:18.061514 | orchestrator | 2025-05-14 00:01:18.061526 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 00:01:18.061538 | orchestrator | Tuesday 13 May 2025 23:56:27 +0000 (0:00:00.322) 0:00:00.594 *********** 2025-05-14 00:01:18.061552 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-14 00:01:18.061565 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-14 00:01:18.061578 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-14 00:01:18.061591 | orchestrator | 2025-05-14 00:01:18.061603 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-14 00:01:18.061617 | orchestrator | 2025-05-14 00:01:18.061629 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 00:01:18.061642 | orchestrator | Tuesday 13 May 2025 23:56:27 +0000 (0:00:00.469) 0:00:01.064 *********** 2025-05-14 00:01:18.061655 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 00:01:18.061668 | orchestrator | 2025-05-14 00:01:18.061681 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-14 00:01:18.061694 | orchestrator | Tuesday 13 May 2025 23:56:28 +0000 (0:00:00.646) 0:00:01.711 *********** 2025-05-14 00:01:18.061708 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-14 00:01:18.061721 | orchestrator | 2025-05-14 00:01:18.061734 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-14 00:01:18.061747 | orchestrator | Tuesday 13 May 2025 23:56:31 +0000 (0:00:03.393) 0:00:05.104 *********** 2025-05-14 00:01:18.061759 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-14 00:01:18.061773 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-14 00:01:18.061786 | orchestrator | 2025-05-14 00:01:18.061799 | orchestrator | TASK [service-ks-register : octavia | 
Creating projects] *********************** 2025-05-14 00:01:18.061812 | orchestrator | Tuesday 13 May 2025 23:56:37 +0000 (0:00:06.064) 0:00:11.169 *********** 2025-05-14 00:01:18.061826 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 00:01:18.061838 | orchestrator | 2025-05-14 00:01:18.061851 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-14 00:01:18.061894 | orchestrator | Tuesday 13 May 2025 23:56:40 +0000 (0:00:03.015) 0:00:14.185 *********** 2025-05-14 00:01:18.061905 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 00:01:18.061916 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-14 00:01:18.061927 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-14 00:01:18.061937 | orchestrator | 2025-05-14 00:01:18.061948 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-14 00:01:18.061958 | orchestrator | Tuesday 13 May 2025 23:56:48 +0000 (0:00:07.733) 0:00:21.919 *********** 2025-05-14 00:01:18.061969 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 00:01:18.061980 | orchestrator | 2025-05-14 00:01:18.061991 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-14 00:01:18.062002 | orchestrator | Tuesday 13 May 2025 23:56:51 +0000 (0:00:03.301) 0:00:25.220 *********** 2025-05-14 00:01:18.062012 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-14 00:01:18.062076 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-14 00:01:18.062088 | orchestrator | 2025-05-14 00:01:18.062099 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-14 00:01:18.062109 | orchestrator | Tuesday 13 May 2025 23:56:59 +0000 (0:00:07.530) 0:00:32.751 *********** 2025-05-14 00:01:18.062120 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-14 00:01:18.062144 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-14 00:01:18.062155 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-14 00:01:18.062165 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-14 00:01:18.062176 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-14 00:01:18.062186 | orchestrator | 2025-05-14 00:01:18.062197 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 00:01:18.062240 | orchestrator | Tuesday 13 May 2025 23:57:14 +0000 (0:00:15.411) 0:00:48.162 *********** 2025-05-14 00:01:18.062251 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 00:01:18.062262 | orchestrator | 2025-05-14 00:01:18.062272 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-14 00:01:18.062283 | orchestrator | Tuesday 13 May 2025 23:57:15 +0000 (0:00:00.582) 0:00:48.744 *********** 2025-05-14 00:01:18.062294 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.062305 | orchestrator | 2025-05-14 00:01:18.062316 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-05-14 00:01:18.062326 | orchestrator | Tuesday 13 May 2025 23:57:20 +0000 (0:00:05.162) 
0:00:53.907 *********** 2025-05-14 00:01:18.062337 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.062348 | orchestrator | 2025-05-14 00:01:18.062358 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-05-14 00:01:18.062393 | orchestrator | Tuesday 13 May 2025 23:57:25 +0000 (0:00:04.455) 0:00:58.362 *********** 2025-05-14 00:01:18.062405 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:01:18.062417 | orchestrator | 2025-05-14 00:01:18.062427 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-05-14 00:01:18.062438 | orchestrator | Tuesday 13 May 2025 23:57:28 +0000 (0:00:03.178) 0:01:01.541 *********** 2025-05-14 00:01:18.062449 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-05-14 00:01:18.062460 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-05-14 00:01:18.062471 | orchestrator | 2025-05-14 00:01:18.062481 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-05-14 00:01:18.062492 | orchestrator | Tuesday 13 May 2025 23:57:38 +0000 (0:00:10.126) 0:01:11.667 *********** 2025-05-14 00:01:18.062503 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-05-14 00:01:18.062523 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-05-14 00:01:18.062536 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-05-14 00:01:18.062548 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-05-14 00:01:18.062559 | orchestrator | 2025-05-14 00:01:18.062569 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-05-14 00:01:18.062580 | orchestrator | Tuesday 13 May 2025 23:57:54 +0000 (0:00:15.599) 0:01:27.267 *********** 2025-05-14 00:01:18.062591 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.062601 | orchestrator | 2025-05-14 00:01:18.062612 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-05-14 00:01:18.062623 | orchestrator | Tuesday 13 May 2025 23:57:59 +0000 (0:00:05.226) 0:01:32.493 *********** 2025-05-14 00:01:18.062633 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.062644 | orchestrator | 2025-05-14 00:01:18.062655 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-05-14 00:01:18.062665 | orchestrator | Tuesday 13 May 2025 23:58:04 +0000 (0:00:05.445) 0:01:37.938 *********** 2025-05-14 00:01:18.062676 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:01:18.062687 | orchestrator | 2025-05-14 00:01:18.062698 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-05-14 00:01:18.062709 | orchestrator | Tuesday 13 May 2025 23:58:04 +0000 (0:00:00.196) 0:01:38.135 *********** 2025-05-14 00:01:18.062719 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.062730 | orchestrator | 2025-05-14 00:01:18.062741 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 00:01:18.062751 | 
orchestrator | Tuesday 13 May 2025 23:58:09 +0000 (0:00:04.384) 0:01:42.519 *********** 2025-05-14 00:01:18.062762 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 00:01:18.062773 | orchestrator | 2025-05-14 00:01:18.062783 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-05-14 00:01:18.062794 | orchestrator | Tuesday 13 May 2025 23:58:10 +0000 (0:00:01.247) 0:01:43.767 *********** 2025-05-14 00:01:18.062805 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.062816 | orchestrator | changed: [testbed-node-1] 2025-05-14 00:01:18.062827 | orchestrator | changed: [testbed-node-2] 2025-05-14 00:01:18.062838 | orchestrator | 2025-05-14 00:01:18.062848 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-05-14 00:01:18.062859 | orchestrator | Tuesday 13 May 2025 23:58:16 +0000 (0:00:05.604) 0:01:49.371 *********** 2025-05-14 00:01:18.062870 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.062880 | orchestrator | changed: [testbed-node-2] 2025-05-14 00:01:18.062891 | orchestrator | changed: [testbed-node-1] 2025-05-14 00:01:18.062902 | orchestrator | 2025-05-14 00:01:18.062912 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-05-14 00:01:18.062923 | orchestrator | Tuesday 13 May 2025 23:58:21 +0000 (0:00:05.576) 0:01:54.947 *********** 2025-05-14 00:01:18.062934 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.062945 | orchestrator | changed: [testbed-node-1] 2025-05-14 00:01:18.062955 | orchestrator | changed: [testbed-node-2] 2025-05-14 00:01:18.062966 | orchestrator | 2025-05-14 00:01:18.062982 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-05-14 00:01:18.062993 | orchestrator | Tuesday 13 May 2025 23:58:22 +0000 (0:00:00.851) 0:01:55.799 *********** 2025-05-14 00:01:18.063004 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:01:18.063019 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:01:18.063030 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:01:18.063041 | orchestrator | 2025-05-14 00:01:18.063052 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-05-14 00:01:18.063069 | orchestrator | Tuesday 13 May 2025 23:58:24 +0000 (0:00:02.139) 0:01:57.938 *********** 2025-05-14 00:01:18.063080 | orchestrator | changed: [testbed-node-2] 2025-05-14 00:01:18.063090 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.063101 | orchestrator | changed: [testbed-node-1] 2025-05-14 00:01:18.063111 | orchestrator | 2025-05-14 00:01:18.063122 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-05-14 00:01:18.063133 | orchestrator | Tuesday 13 May 2025 23:58:25 +0000 (0:00:01.147) 0:01:59.085 *********** 2025-05-14 00:01:18.063144 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.063155 | orchestrator | changed: [testbed-node-1] 2025-05-14 00:01:18.063165 | orchestrator | changed: [testbed-node-2] 2025-05-14 00:01:18.063176 | orchestrator | 2025-05-14 00:01:18.063187 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-05-14 00:01:18.063197 | orchestrator | Tuesday 13 May 2025 23:58:27 +0000 (0:00:01.217) 0:02:00.303 *********** 2025-05-14 00:01:18.063224 | orchestrator | changed: 
[testbed-node-0] 2025-05-14 00:01:18.063235 | orchestrator | changed: [testbed-node-1] 2025-05-14 00:01:18.063246 | orchestrator | changed: [testbed-node-2] 2025-05-14 00:01:18.063257 | orchestrator | 2025-05-14 00:01:18.063276 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-05-14 00:01:18.063287 | orchestrator | Tuesday 13 May 2025 23:58:29 +0000 (0:00:01.999) 0:02:02.302 *********** 2025-05-14 00:01:18.063298 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.063309 | orchestrator | changed: [testbed-node-1] 2025-05-14 00:01:18.063323 | orchestrator | changed: [testbed-node-2] 2025-05-14 00:01:18.063344 | orchestrator | 2025-05-14 00:01:18.063365 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-05-14 00:01:18.063384 | orchestrator | Tuesday 13 May 2025 23:58:30 +0000 (0:00:01.950) 0:02:04.253 *********** 2025-05-14 00:01:18.063402 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:01:18.063424 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:01:18.063445 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:01:18.063465 | orchestrator | 2025-05-14 00:01:18.063482 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-05-14 00:01:18.063493 | orchestrator | Tuesday 13 May 2025 23:58:31 +0000 (0:00:00.619) 0:02:04.872 *********** 2025-05-14 00:01:18.063504 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:01:18.063515 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:01:18.063526 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:01:18.063537 | orchestrator | 2025-05-14 00:01:18.063548 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 00:01:18.063559 | orchestrator | Tuesday 13 May 2025 23:58:34 +0000 (0:00:02.893) 0:02:07.766 *********** 2025-05-14 00:01:18.063570 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 00:01:18.063580 | orchestrator | 2025-05-14 00:01:18.063591 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-05-14 00:01:18.063602 | orchestrator | Tuesday 13 May 2025 23:58:35 +0000 (0:00:00.702) 0:02:08.468 *********** 2025-05-14 00:01:18.063612 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:01:18.063623 | orchestrator | 2025-05-14 00:01:18.063634 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-05-14 00:01:18.063644 | orchestrator | Tuesday 13 May 2025 23:58:38 +0000 (0:00:03.315) 0:02:11.784 *********** 2025-05-14 00:01:18.063655 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:01:18.063666 | orchestrator | 2025-05-14 00:01:18.063676 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-05-14 00:01:18.063687 | orchestrator | Tuesday 13 May 2025 23:58:41 +0000 (0:00:03.098) 0:02:14.882 *********** 2025-05-14 00:01:18.063698 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-05-14 00:01:18.063709 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-05-14 00:01:18.063719 | orchestrator | 2025-05-14 00:01:18.063730 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-05-14 00:01:18.063751 | orchestrator | Tuesday 13 May 2025 23:58:48 +0000 (0:00:06.393) 0:02:21.275 *********** 2025-05-14 00:01:18.063762 | 
orchestrator | ok: [testbed-node-0] 2025-05-14 00:01:18.063773 | orchestrator | 2025-05-14 00:01:18.063784 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-05-14 00:01:18.063795 | orchestrator | Tuesday 13 May 2025 23:58:51 +0000 (0:00:03.204) 0:02:24.480 *********** 2025-05-14 00:01:18.063805 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:01:18.063816 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:01:18.063827 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:01:18.063837 | orchestrator | 2025-05-14 00:01:18.063848 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-05-14 00:01:18.063859 | orchestrator | Tuesday 13 May 2025 23:58:51 +0000 (0:00:00.358) 0:02:24.839 *********** 2025-05-14 00:01:18.063880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.063905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.063918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.063931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.063950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.063962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.063980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.063992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.064013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.064025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.064036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.064055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.064067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.064084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.064096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.064108 | orchestrator | 2025-05-14 00:01:18.064120 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-05-14 00:01:18.064131 | orchestrator | Tuesday 13 May 2025 23:58:54 +0000 (0:00:02.716) 0:02:27.556 *********** 2025-05-14 00:01:18.064142 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:01:18.064153 | orchestrator | 2025-05-14 00:01:18.064170 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-05-14 00:01:18.064181 | orchestrator | Tuesday 13 May 2025 23:58:54 +0000 (0:00:00.323) 0:02:27.879 *********** 2025-05-14 00:01:18.064193 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:01:18.064553 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:01:18.064619 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:01:18.064631 | orchestrator | 2025-05-14 00:01:18.064642 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-05-14 00:01:18.064664 | orchestrator | Tuesday 13 May 2025 23:58:54 +0000 (0:00:00.319) 0:02:28.198 *********** 2025-05-14 00:01:18.064677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 00:01:18.064712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 00:01:18.064746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.064759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.064771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 00:01:18.064783 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:01:18.064820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 00:01:18.064840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 00:01:18.064976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.064998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.065010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 00:01:18.065021 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:01:18.065038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 00:01:18.065061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 00:01:18.065081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.065093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.065105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 00:01:18.065116 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:01:18.065127 | orchestrator | 2025-05-14 00:01:18.065138 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 00:01:18.065149 | orchestrator | Tuesday 13 May 2025 23:58:55 +0000 (0:00:00.779) 0:02:28.978 *********** 2025-05-14 00:01:18.065161 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 00:01:18.065172 | orchestrator | 2025-05-14 00:01:18.065183 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-05-14 00:01:18.065194 | orchestrator | Tuesday 13 May 2025 23:58:56 +0000 (0:00:00.563) 0:02:29.542 *********** 2025-05-14 00:01:18.065239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.065259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18 | INFO  | Task f6e81fa1-d417-4874-bad6-e772623aa49e is in state SUCCESS 2025-05-14 00:01:18.066104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.066118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.066130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.066142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.066659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.066682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.066776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.066791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.066803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.066815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 
00:01:18.066832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.066844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.066869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.066881 | orchestrator | 2025-05-14 00:01:18.066893 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-05-14 00:01:18.066904 | orchestrator | Tuesday 13 May 2025 23:59:01 +0000 (0:00:05.329) 0:02:34.872 *********** 2025-05-14 00:01:18.066916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 00:01:18.066927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 00:01:18.066939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.066950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.066967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 00:01:18.066985 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:01:18.067005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 00:01:18.067018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2025-05-14 00:01:18.067035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.067059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.067088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 00:01:18.067106 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:01:18.067133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 00:01:18.067173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 00:01:18.067195 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.067243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.067264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 00:01:18.067278 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:01:18.067291 | orchestrator | 2025-05-14 00:01:18.067304 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-05-14 00:01:18.067317 | orchestrator | Tuesday 13 May 2025 23:59:02 +0000 (0:00:00.677) 0:02:35.549 *********** 2025-05-14 00:01:18.067337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 00:01:18.067363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 00:01:18.067377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.067399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.067411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 00:01:18.067422 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:01:18.067434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 00:01:18.067445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 00:01:18.067468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.067480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.067500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 00:01:18.067512 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:01:18.067525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 00:01:18.067546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 00:01:18.067564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.067601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 00:01:18.067622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 00:01:18.067641 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:01:18.067657 | orchestrator | 2025-05-14 00:01:18.067668 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-05-14 00:01:18.067679 | orchestrator | Tuesday 13 May 2025 23:59:03 +0000 (0:00:00.804) 0:02:36.354 *********** 2025-05-14 00:01:18.067700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.067712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.067724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.067748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.067760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.067771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.067790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.067802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.067813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.067831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.067848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.067860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.067907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.067920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.067931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.067942 | orchestrator | 2025-05-14 00:01:18.067953 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-05-14 00:01:18.067965 | orchestrator | Tuesday 13 May 2025 23:59:08 +0000 (0:00:05.202) 0:02:41.556 *********** 2025-05-14 00:01:18.067976 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-14 00:01:18.067995 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-14 00:01:18.068006 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-14 00:01:18.068017 | orchestrator | 2025-05-14 00:01:18.068028 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-05-14 00:01:18.068039 | orchestrator | Tuesday 13 May 2025 23:59:09 +0000 (0:00:01.599) 0:02:43.156 *********** 2025-05-14 00:01:18.068050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.068068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.068088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.068100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.068111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.068132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.068143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.068159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.068171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.068189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.068231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.068254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.068266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.068283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.068295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.068306 | orchestrator | 2025-05-14 00:01:18.068317 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-05-14 00:01:18.068328 | orchestrator | Tuesday 13 May 2025 23:59:26 +0000 (0:00:16.371) 0:02:59.527 *********** 2025-05-14 00:01:18.068339 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.068351 | orchestrator | changed: [testbed-node-1] 2025-05-14 00:01:18.068362 | orchestrator | changed: [testbed-node-2] 2025-05-14 00:01:18.068372 | orchestrator | 2025-05-14 00:01:18.068383 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-05-14 00:01:18.068394 | orchestrator | Tuesday 13 May 2025 23:59:27 +0000 (0:00:01.425) 0:03:00.953 *********** 2025-05-14 00:01:18.068406 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-14 00:01:18.068417 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 
2025-05-14 00:01:18.068435 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-14 00:01:18.068446 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-14 00:01:18.068457 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-14 00:01:18.068468 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-14 00:01:18.068479 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-14 00:01:18.068490 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-14 00:01:18.068500 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-14 00:01:18.068522 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-14 00:01:18.068533 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-14 00:01:18.068544 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-14 00:01:18.068554 | orchestrator | 2025-05-14 00:01:18.068565 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-05-14 00:01:18.068581 | orchestrator | Tuesday 13 May 2025 23:59:33 +0000 (0:00:05.471) 0:03:06.425 *********** 2025-05-14 00:01:18.068601 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-14 00:01:18.068620 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-14 00:01:18.068640 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-14 00:01:18.068659 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-14 00:01:18.068675 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-14 00:01:18.068686 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-14 00:01:18.068697 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-14 00:01:18.068707 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-14 00:01:18.068718 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-14 00:01:18.068729 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-14 00:01:18.068739 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-14 00:01:18.068750 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-14 00:01:18.068760 | orchestrator | 2025-05-14 00:01:18.068771 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-05-14 00:01:18.068781 | orchestrator | Tuesday 13 May 2025 23:59:38 +0000 (0:00:05.332) 0:03:11.758 *********** 2025-05-14 00:01:18.068792 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-14 00:01:18.068802 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-14 00:01:18.068813 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-14 00:01:18.068823 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-14 00:01:18.068834 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-14 00:01:18.068845 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-14 00:01:18.068855 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-14 00:01:18.068865 | orchestrator | changed: [testbed-node-1] => 
(item=server_ca.cert.pem) 2025-05-14 00:01:18.068876 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-14 00:01:18.068887 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-14 00:01:18.068897 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-14 00:01:18.068908 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-14 00:01:18.068918 | orchestrator | 2025-05-14 00:01:18.068928 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-05-14 00:01:18.068939 | orchestrator | Tuesday 13 May 2025 23:59:43 +0000 (0:00:05.347) 0:03:17.105 *********** 2025-05-14 00:01:18.068956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.068984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.068996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 00:01:18.069008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.069019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.069036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-14 00:01:18.069048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.069071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.069083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.069095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.069106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.069118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-14 00:01:18.069152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.069171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.069189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-14 00:01:18.069228 | orchestrator | 2025-05-14 00:01:18.069248 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 00:01:18.069265 | orchestrator | Tuesday 13 May 2025 23:59:47 +0000 (0:00:03.877) 0:03:20.983 *********** 2025-05-14 00:01:18.069277 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:01:18.069288 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:01:18.069298 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:01:18.069309 | orchestrator | 2025-05-14 00:01:18.069320 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-05-14 00:01:18.069331 | orchestrator | Tuesday 13 May 2025 23:59:47 +0000 (0:00:00.266) 0:03:21.250 *********** 2025-05-14 00:01:18.069341 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.069352 | orchestrator | 2025-05-14 00:01:18.069363 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-05-14 00:01:18.069374 | orchestrator | Tuesday 13 May 2025 23:59:50 +0000 (0:00:02.227) 0:03:23.478 *********** 2025-05-14 00:01:18.069384 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.069395 | orchestrator | 2025-05-14 00:01:18.069406 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-05-14 00:01:18.069417 | orchestrator | Tuesday 13 May 2025 23:59:52 +0000 (0:00:01.950) 0:03:25.428 *********** 2025-05-14 00:01:18.069427 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.069438 | orchestrator | 2025-05-14 00:01:18.069449 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-05-14 00:01:18.069460 | orchestrator | Tuesday 13 May 2025 23:59:54 +0000 (0:00:02.038) 0:03:27.467 *********** 2025-05-14 00:01:18.069470 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.069481 | orchestrator | 2025-05-14 00:01:18.069492 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-05-14 00:01:18.069502 | orchestrator | Tuesday 13 May 2025 23:59:56 +0000 (0:00:02.005) 0:03:29.472 *********** 2025-05-14 00:01:18.069513 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:01:18.069523 | orchestrator | 2025-05-14 00:01:18.069534 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-14 00:01:18.069545 | orchestrator | Wednesday 14 May 2025 00:00:19 +0000 (0:00:23.592) 0:03:53.065 ********* 2025-05-14 00:01:18.069556 | orchestrator | 2025-05-14 00:01:18.069566 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-14 00:01:18.069577 | orchestrator | Wednesday 14 May 2025 00:00:19 +0000 (0:00:00.073) 0:03:53.138 ********* 2025-05-14 00:01:18.069587 | orchestrator | 2025-05-14 00:01:18.069598 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-14 00:01:18.069613 | orchestrator | Wednesday 14 May 2025 00:00:19 +0000 (0:00:00.069) 0:03:53.208 ********* 2025-05-14 00:01:18.069632 | orchestrator | 2025-05-14 
00:01:18.069651 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-05-14 00:01:18.069680 | orchestrator | Wednesday 14 May 2025 00:00:20 +0000 (0:00:00.069) 0:03:53.277 *********
2025-05-14 00:01:18.069700 | orchestrator | changed: [testbed-node-0]
2025-05-14 00:01:18.069719 | orchestrator | changed: [testbed-node-1]
2025-05-14 00:01:18.069738 | orchestrator | changed: [testbed-node-2]
2025-05-14 00:01:18.069755 | orchestrator |
2025-05-14 00:01:18.069767 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-05-14 00:01:18.069777 | orchestrator | Wednesday 14 May 2025 00:00:37 +0000 (0:00:17.237) 0:04:10.515 *********
2025-05-14 00:01:18.069788 | orchestrator | changed: [testbed-node-0]
2025-05-14 00:01:18.069799 | orchestrator | changed: [testbed-node-1]
2025-05-14 00:01:18.069810 | orchestrator | changed: [testbed-node-2]
2025-05-14 00:01:18.069820 | orchestrator |
2025-05-14 00:01:18.069831 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-05-14 00:01:18.069842 | orchestrator | Wednesday 14 May 2025 00:00:49 +0000 (0:00:12.268) 0:04:22.784 *********
2025-05-14 00:01:18.069853 | orchestrator | changed: [testbed-node-1]
2025-05-14 00:01:18.069864 | orchestrator | changed: [testbed-node-0]
2025-05-14 00:01:18.069875 | orchestrator | changed: [testbed-node-2]
2025-05-14 00:01:18.069885 | orchestrator |
2025-05-14 00:01:18.069896 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-05-14 00:01:18.069914 | orchestrator | Wednesday 14 May 2025 00:01:00 +0000 (0:00:10.685) 0:04:33.470 *********
2025-05-14 00:01:18.069948 | orchestrator | changed: [testbed-node-0]
2025-05-14 00:01:18.069960 | orchestrator | changed: [testbed-node-2]
2025-05-14 00:01:18.069971 | orchestrator | changed: [testbed-node-1]
2025-05-14 00:01:18.069982 | orchestrator |
2025-05-14 00:01:18.069992 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-05-14 00:01:18.070003 | orchestrator | Wednesday 14 May 2025 00:01:10 +0000 (0:00:10.254) 0:04:43.724 *********
2025-05-14 00:01:18.070014 | orchestrator | changed: [testbed-node-0]
2025-05-14 00:01:18.070063 | orchestrator | changed: [testbed-node-1]
2025-05-14 00:01:18.070074 | orchestrator | changed: [testbed-node-2]
2025-05-14 00:01:18.070085 | orchestrator |
2025-05-14 00:01:18.070096 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 00:01:18.070107 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-14 00:01:18.070118 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-14 00:01:18.070129 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-14 00:01:18.070140 | orchestrator |
2025-05-14 00:01:18.070151 | orchestrator |
2025-05-14 00:01:18.070162 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 00:01:18.070172 | orchestrator | Wednesday 14 May 2025 00:01:15 +0000 (0:00:05.176) 0:04:48.900 *********
2025-05-14 00:01:18.070257 | orchestrator | ===============================================================================
2025-05-14 00:01:18.070273 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.59s
2025-05-14 00:01:18.070284 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.24s
2025-05-14 00:01:18.070295 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.37s
2025-05-14 00:01:18.070306 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.60s
2025-05-14 00:01:18.070317 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.41s
2025-05-14 00:01:18.070328 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 12.27s
2025-05-14 00:01:18.070338 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.69s
2025-05-14 00:01:18.070349 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.25s
2025-05-14 00:01:18.070369 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.13s
2025-05-14 00:01:18.070380 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.73s
2025-05-14 00:01:18.070390 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.53s
2025-05-14 00:01:18.070401 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.39s
2025-05-14 00:01:18.070412 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.06s
2025-05-14 00:01:18.070422 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.60s
2025-05-14 00:01:18.070433 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.58s
2025-05-14 00:01:18.070443 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.47s
2025-05-14 00:01:18.070454 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.45s
2025-05-14 00:01:18.070464 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.35s
2025-05-14 00:01:18.070475 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.33s
2025-05-14 00:01:18.070486 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.33s
2025-05-14 00:01:18.070497 | orchestrator | 2025-05-14 00:01:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-14 00:01:21.112346 | orchestrator | 2025-05-14 00:01:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-14 00:01:24.154264 | orchestrator | 2025-05-14 00:01:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-14 00:01:27.212526 | orchestrator | 2025-05-14 00:01:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-14 00:01:30.249290 | orchestrator | 2025-05-14 00:01:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-14 00:01:33.288615 | orchestrator | 2025-05-14 00:01:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-14 00:01:36.333654 | orchestrator | 2025-05-14 00:01:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-14 00:01:39.385355 | orchestrator | 2025-05-14 00:01:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-14 00:01:42.432083 | orchestrator | 2025-05-14 00:01:42 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-14 00:01:45.473544 | orchestrator | 2025-05-14 00:01:45 |
INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:01:48.519323 | orchestrator | 2025-05-14 00:01:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:01:51.566685 | orchestrator | 2025-05-14 00:01:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:01:54.614303 | orchestrator | 2025-05-14 00:01:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:01:57.666387 | orchestrator | 2025-05-14 00:01:57 | INFO  | Task 34328e2f-5ee2-4db2-b83e-e70ce48e2167 is in state STARTED 2025-05-14 00:01:57.666488 | orchestrator | 2025-05-14 00:01:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 00:02:00.721781 | orchestrator | 2025-05-14 00:02:00 | INFO  | Task 34328e2f-5ee2-4db2-b83e-e70ce48e2167 is in state STARTED 2025-05-14 00:02:00.721852 | orchestrator | 2025-05-14 00:02:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 00:02:03.778800 | orchestrator | 2025-05-14 00:02:03 | INFO  | Task 34328e2f-5ee2-4db2-b83e-e70ce48e2167 is in state STARTED 2025-05-14 00:02:03.778906 | orchestrator | 2025-05-14 00:02:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 00:02:06.844867 | orchestrator | 2025-05-14 00:02:06 | INFO  | Task 34328e2f-5ee2-4db2-b83e-e70ce48e2167 is in state STARTED 2025-05-14 00:02:06.845032 | orchestrator | 2025-05-14 00:02:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 00:02:09.910966 | orchestrator | 2025-05-14 00:02:09 | INFO  | Task 34328e2f-5ee2-4db2-b83e-e70ce48e2167 is in state STARTED 2025-05-14 00:02:09.911580 | orchestrator | 2025-05-14 00:02:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 00:02:12.965920 | orchestrator | 2025-05-14 00:02:12 | INFO  | Task 34328e2f-5ee2-4db2-b83e-e70ce48e2167 is in state STARTED 2025-05-14 00:02:12.966010 | orchestrator | 2025-05-14 00:02:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 00:02:16.025922 | orchestrator | 2025-05-14 00:02:16 | INFO  | Task 34328e2f-5ee2-4db2-b83e-e70ce48e2167 is in state STARTED 2025-05-14 00:02:16.026087 | orchestrator | 2025-05-14 00:02:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 00:02:19.069844 | orchestrator | 2025-05-14 00:02:19 | INFO  | Task 34328e2f-5ee2-4db2-b83e-e70ce48e2167 is in state SUCCESS 2025-05-14 00:02:19.069929 | orchestrator | 2025-05-14 00:02:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:02:22.119065 | orchestrator | 2025-05-14 00:02:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:02:25.160717 | orchestrator | 2025-05-14 00:02:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:02:28.206547 | orchestrator | 2025-05-14 00:02:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:02:31.255435 | orchestrator | 2025-05-14 00:02:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:02:34.304666 | orchestrator | 2025-05-14 00:02:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:02:37.349859 | orchestrator | 2025-05-14 00:02:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-14 00:02:40.401822 | orchestrator | 2025-05-14 00:02:40.401935 | orchestrator | None 2025-05-14 00:02:40.661457 | orchestrator | 2025-05-14 00:02:40.666291 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed May 14 00:02:40 UTC 2025 2025-05-14 00:02:40.666394 | orchestrator | 2025-05-14 00:02:41.000061 | orchestrator | ok: Runtime: 0:36:42.514899 2025-05-14 00:02:41.261632 | 2025-05-14 
00:02:41.261777 | TASK [Bootstrap services] 2025-05-14 00:02:42.103168 | orchestrator | 2025-05-14 00:02:42.103297 | orchestrator | # BOOTSTRAP 2025-05-14 00:02:42.103306 | orchestrator | 2025-05-14 00:02:42.103312 | orchestrator | + set -e 2025-05-14 00:02:42.103325 | orchestrator | + echo 2025-05-14 00:02:42.103331 | orchestrator | + echo '# BOOTSTRAP' 2025-05-14 00:02:42.103337 | orchestrator | + echo 2025-05-14 00:02:42.103360 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-05-14 00:02:42.108658 | orchestrator | + set -e 2025-05-14 00:02:42.108701 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-05-14 00:02:43.900364 | orchestrator | 2025-05-14 00:02:43 | INFO  | It takes a moment until task 10a5e393-04e5-48d6-aa36-510224008391 (flavor-manager) has been started and output is visible here. 2025-05-14 00:02:48.011509 | orchestrator | 2025-05-14 00:02:48 | INFO  | Flavor SCS-1V-4 created 2025-05-14 00:02:48.179729 | orchestrator | 2025-05-14 00:02:48 | INFO  | Flavor SCS-2V-8 created 2025-05-14 00:02:48.485918 | orchestrator | 2025-05-14 00:02:48 | INFO  | Flavor SCS-4V-16 created 2025-05-14 00:02:48.649972 | orchestrator | 2025-05-14 00:02:48 | INFO  | Flavor SCS-8V-32 created 2025-05-14 00:02:48.766299 | orchestrator | 2025-05-14 00:02:48 | INFO  | Flavor SCS-1V-2 created 2025-05-14 00:02:48.894244 | orchestrator | 2025-05-14 00:02:48 | INFO  | Flavor SCS-2V-4 created 2025-05-14 00:02:49.038406 | orchestrator | 2025-05-14 00:02:49 | INFO  | Flavor SCS-4V-8 created 2025-05-14 00:02:49.169729 | orchestrator | 2025-05-14 00:02:49 | INFO  | Flavor SCS-8V-16 created 2025-05-14 00:02:49.315090 | orchestrator | 2025-05-14 00:02:49 | INFO  | Flavor SCS-16V-32 created 2025-05-14 00:02:49.464119 | orchestrator | 2025-05-14 00:02:49 | INFO  | Flavor SCS-1V-8 created 2025-05-14 00:02:49.580333 | orchestrator | 2025-05-14 00:02:49 | INFO  | Flavor SCS-2V-16 created 2025-05-14 00:02:49.708535 | orchestrator | 2025-05-14 00:02:49 | INFO  | Flavor SCS-4V-32 created 2025-05-14 00:02:49.835420 | orchestrator | 2025-05-14 00:02:49 | INFO  | Flavor SCS-1L-1 created 2025-05-14 00:02:49.962333 | orchestrator | 2025-05-14 00:02:49 | INFO  | Flavor SCS-2V-4-20s created 2025-05-14 00:02:50.102755 | orchestrator | 2025-05-14 00:02:50 | INFO  | Flavor SCS-4V-16-100s created 2025-05-14 00:02:50.234640 | orchestrator | 2025-05-14 00:02:50 | INFO  | Flavor SCS-1V-4-10 created 2025-05-14 00:02:50.341698 | orchestrator | 2025-05-14 00:02:50 | INFO  | Flavor SCS-2V-8-20 created 2025-05-14 00:02:50.489240 | orchestrator | 2025-05-14 00:02:50 | INFO  | Flavor SCS-4V-16-50 created 2025-05-14 00:02:50.628633 | orchestrator | 2025-05-14 00:02:50 | INFO  | Flavor SCS-8V-32-100 created 2025-05-14 00:02:50.766764 | orchestrator | 2025-05-14 00:02:50 | INFO  | Flavor SCS-1V-2-5 created 2025-05-14 00:02:50.907276 | orchestrator | 2025-05-14 00:02:50 | INFO  | Flavor SCS-2V-4-10 created 2025-05-14 00:02:51.049519 | orchestrator | 2025-05-14 00:02:51 | INFO  | Flavor SCS-4V-8-20 created 2025-05-14 00:02:51.209476 | orchestrator | 2025-05-14 00:02:51 | INFO  | Flavor SCS-8V-16-50 created 2025-05-14 00:02:51.371022 | orchestrator | 2025-05-14 00:02:51 | INFO  | Flavor SCS-16V-32-100 created 2025-05-14 00:02:51.497693 | orchestrator | 2025-05-14 00:02:51 | INFO  | Flavor SCS-1V-8-20 created 2025-05-14 00:02:51.627910 | orchestrator | 2025-05-14 00:02:51 | INFO  | Flavor SCS-2V-16-50 created 2025-05-14 00:02:51.764312 | orchestrator | 2025-05-14 00:02:51 | INFO  | 
Flavor SCS-4V-32-100 created 2025-05-14 00:02:51.898694 | orchestrator | 2025-05-14 00:02:51 | INFO  | Flavor SCS-1L-1-5 created 2025-05-14 00:02:54.270278 | orchestrator | 2025-05-14 00:02:54 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-05-14 00:02:54.338981 | orchestrator | 2025-05-14 00:02:54 | INFO  | Task bf54a58d-8450-4f88-96b5-94053477d6c4 (bootstrap-basic) was prepared for execution. 2025-05-14 00:02:54.339175 | orchestrator | 2025-05-14 00:02:54 | INFO  | It takes a moment until task bf54a58d-8450-4f88-96b5-94053477d6c4 (bootstrap-basic) has been started and output is visible here. 2025-05-14 00:02:58.393632 | orchestrator | 2025-05-14 00:02:58.394363 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-05-14 00:02:58.395631 | orchestrator | 2025-05-14 00:02:58.397235 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 00:02:58.399195 | orchestrator | Wednesday 14 May 2025 00:02:58 +0000 (0:00:00.073) 0:00:00.073 ********* 2025-05-14 00:03:00.259562 | orchestrator | ok: [localhost] 2025-05-14 00:03:00.260535 | orchestrator | 2025-05-14 00:03:00.260576 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-05-14 00:03:00.260849 | orchestrator | Wednesday 14 May 2025 00:03:00 +0000 (0:00:01.868) 0:00:01.942 ********* 2025-05-14 00:03:10.083244 | orchestrator | ok: [localhost] 2025-05-14 00:03:10.083472 | orchestrator | 2025-05-14 00:03:10.084361 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-05-14 00:03:10.084835 | orchestrator | Wednesday 14 May 2025 00:03:10 +0000 (0:00:09.823) 0:00:11.765 ********* 2025-05-14 00:03:17.602443 | orchestrator | changed: [localhost] 2025-05-14 00:03:17.602602 | orchestrator | 2025-05-14 00:03:17.602685 | orchestrator | TASK [Get volume type local] *************************************************** 2025-05-14 00:03:17.605590 | orchestrator | Wednesday 14 May 2025 00:03:17 +0000 (0:00:07.518) 0:00:19.283 ********* 2025-05-14 00:03:24.898722 | orchestrator | ok: [localhost] 2025-05-14 00:03:24.899380 | orchestrator | 2025-05-14 00:03:24.900108 | orchestrator | TASK [Create volume type local] ************************************************ 2025-05-14 00:03:24.903374 | orchestrator | Wednesday 14 May 2025 00:03:24 +0000 (0:00:07.297) 0:00:26.581 ********* 2025-05-14 00:03:31.427038 | orchestrator | changed: [localhost] 2025-05-14 00:03:31.428410 | orchestrator | 2025-05-14 00:03:31.428557 | orchestrator | TASK [Create public network] *************************************************** 2025-05-14 00:03:31.431678 | orchestrator | Wednesday 14 May 2025 00:03:31 +0000 (0:00:06.527) 0:00:33.108 ********* 2025-05-14 00:03:36.449366 | orchestrator | changed: [localhost] 2025-05-14 00:03:36.449479 | orchestrator | 2025-05-14 00:03:36.449565 | orchestrator | TASK [Set public network to default] ******************************************* 2025-05-14 00:03:36.450468 | orchestrator | Wednesday 14 May 2025 00:03:36 +0000 (0:00:05.022) 0:00:38.130 ********* 2025-05-14 00:03:42.704896 | orchestrator | changed: [localhost] 2025-05-14 00:03:42.706537 | orchestrator | 2025-05-14 00:03:42.707170 | orchestrator | TASK [Create public subnet] **************************************************** 2025-05-14 00:03:42.709499 | orchestrator | Wednesday 14 May 2025 00:03:42 +0000 (0:00:06.253) 0:00:44.383 ********* 2025-05-14 00:03:47.020211 | 
orchestrator | changed: [localhost]
2025-05-14 00:03:47.021225 | orchestrator |
2025-05-14 00:03:47.022129 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-05-14 00:03:47.023161 | orchestrator | Wednesday 14 May 2025 00:03:47 +0000 (0:00:04.318) 0:00:48.702 *********
2025-05-14 00:03:50.662593 | orchestrator | changed: [localhost]
2025-05-14 00:03:50.662694 | orchestrator |
2025-05-14 00:03:50.664390 | orchestrator | TASK [Create manager role] *****************************************************
2025-05-14 00:03:50.664646 | orchestrator | Wednesday 14 May 2025 00:03:50 +0000 (0:00:03.642) 0:00:52.345 *********
2025-05-14 00:03:54.314603 | orchestrator | ok: [localhost]
2025-05-14 00:03:54.315100 | orchestrator |
2025-05-14 00:03:54.316422 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 00:03:54.316858 | orchestrator | 2025-05-14 00:03:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-14 00:03:54.317521 | orchestrator | 2025-05-14 00:03:54 | INFO  | Please wait and do not abort execution.
2025-05-14 00:03:54.319715 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-14 00:03:54.320845 | orchestrator |
2025-05-14 00:03:54.321610 | orchestrator |
2025-05-14 00:03:54.322400 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 00:03:54.323366 | orchestrator | Wednesday 14 May 2025 00:03:54 +0000 (0:00:03.650) 0:00:55.996 *********
2025-05-14 00:03:54.324700 | orchestrator | ===============================================================================
2025-05-14 00:03:54.325995 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.82s
2025-05-14 00:03:54.326660 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.52s
2025-05-14 00:03:54.327432 | orchestrator | Get volume type local --------------------------------------------------- 7.30s
2025-05-14 00:03:54.328645 | orchestrator | Create volume type local ------------------------------------------------ 6.53s
2025-05-14 00:03:54.329631 | orchestrator | Set public network to default ------------------------------------------- 6.25s
2025-05-14 00:03:54.330324 | orchestrator | Create public network --------------------------------------------------- 5.02s
2025-05-14 00:03:54.330817 | orchestrator | Create public subnet ---------------------------------------------------- 4.32s
2025-05-14 00:03:54.331434 | orchestrator | Create manager role ----------------------------------------------------- 3.65s
2025-05-14 00:03:54.331732 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.64s
2025-05-14 00:03:54.332353 | orchestrator | Gathering Facts --------------------------------------------------------- 1.87s
2025-05-14 00:03:56.708442 | orchestrator | 2025-05-14 00:03:56 | INFO  | It takes a moment until task d2b1d5d4-0059-4825-a005-e95d25a22b7f (image-manager) has been started and output is visible here.
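
[Editorial aside] The bootstrap-basic play recapped above provisions the testbed's base OpenStack resources: the LUKS and local volume types, the external "public" network with its subnet and default subnet pool, and a manager role. As a rough illustration of what those tasks amount to outside Ansible, here is a minimal openstacksdk sketch; the cloud name "admin" and the CIDR are hypothetical placeholders, not values taken from this job:

    import openstack

    # Hypothetical clouds.yaml entry; not part of this job's configuration.
    conn = openstack.connect(cloud="admin")

    # "Get volume type LUKS" / "Create volume type LUKS": create only if missing.
    if not any(t.name == "LUKS" for t in conn.block_storage.types()):
        conn.block_storage.create_type(name="LUKS")

    # "Create public network" / "Create public subnet": an external network
    # with an IPv4 subnet (the CIDR below is a placeholder, not from this log).
    network = conn.network.find_network("public")
    if network is None:
        network = conn.network.create_network(name="public", is_router_external=True)
        conn.network.create_subnet(
            network_id=network.id,
            name="public-subnet",
            ip_version=4,
            cidr="192.0.2.0/24",
        )

    # "Create manager role": idempotent role creation.
    if conn.identity.find_role("manager") is None:
        conn.identity.create_role(name="manager")

The find-before-create pattern mirrors why the play reports some tasks as "ok" (already present) and others as "changed" (newly created).
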
2025-05-14 00:04:00.087205 | orchestrator | 2025-05-14 00:04:00 | INFO  | Processing image 'Cirros 0.6.2' 2025-05-14 00:04:00.291837 | orchestrator | 2025-05-14 00:04:00 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-05-14 00:04:00.292300 | orchestrator | 2025-05-14 00:04:00 | INFO  | Importing image Cirros 0.6.2 2025-05-14 00:04:00.293470 | orchestrator | 2025-05-14 00:04:00 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-05-14 00:04:02.012984 | orchestrator | 2025-05-14 00:04:02 | INFO  | Waiting for image to leave queued state... 2025-05-14 00:04:04.237534 | orchestrator | 2025-05-14 00:04:04 | INFO  | Waiting for import to complete... 2025-05-14 00:04:14.605342 | orchestrator | 2025-05-14 00:04:14 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-05-14 00:04:14.784202 | orchestrator | 2025-05-14 00:04:14 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-05-14 00:04:14.784487 | orchestrator | 2025-05-14 00:04:14 | INFO  | Setting internal_version = 0.6.2 2025-05-14 00:04:14.785247 | orchestrator | 2025-05-14 00:04:14 | INFO  | Setting image_original_user = cirros 2025-05-14 00:04:14.786131 | orchestrator | 2025-05-14 00:04:14 | INFO  | Adding tag os:cirros 2025-05-14 00:04:15.076044 | orchestrator | 2025-05-14 00:04:15 | INFO  | Setting property architecture: x86_64 2025-05-14 00:04:15.368980 | orchestrator | 2025-05-14 00:04:15 | INFO  | Setting property hw_disk_bus: scsi 2025-05-14 00:04:15.634651 | orchestrator | 2025-05-14 00:04:15 | INFO  | Setting property hw_rng_model: virtio 2025-05-14 00:04:15.812504 | orchestrator | 2025-05-14 00:04:15 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-14 00:04:16.000276 | orchestrator | 2025-05-14 00:04:15 | INFO  | Setting property hw_watchdog_action: reset 2025-05-14 00:04:16.205838 | orchestrator | 2025-05-14 00:04:16 | INFO  | Setting property hypervisor_type: qemu 2025-05-14 00:04:16.400292 | orchestrator | 2025-05-14 00:04:16 | INFO  | Setting property os_distro: cirros 2025-05-14 00:04:16.567165 | orchestrator | 2025-05-14 00:04:16 | INFO  | Setting property replace_frequency: never 2025-05-14 00:04:16.785441 | orchestrator | 2025-05-14 00:04:16 | INFO  | Setting property uuid_validity: none 2025-05-14 00:04:16.976876 | orchestrator | 2025-05-14 00:04:16 | INFO  | Setting property provided_until: none 2025-05-14 00:04:17.146375 | orchestrator | 2025-05-14 00:04:17 | INFO  | Setting property image_description: Cirros 2025-05-14 00:04:17.337206 | orchestrator | 2025-05-14 00:04:17 | INFO  | Setting property image_name: Cirros 2025-05-14 00:04:17.519436 | orchestrator | 2025-05-14 00:04:17 | INFO  | Setting property internal_version: 0.6.2 2025-05-14 00:04:17.728635 | orchestrator | 2025-05-14 00:04:17 | INFO  | Setting property image_original_user: cirros 2025-05-14 00:04:17.909360 | orchestrator | 2025-05-14 00:04:17 | INFO  | Setting property os_version: 0.6.2 2025-05-14 00:04:18.144594 | orchestrator | 2025-05-14 00:04:18 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-05-14 00:04:18.353644 | orchestrator | 2025-05-14 00:04:18 | INFO  | Setting property image_build_date: 2023-05-30 2025-05-14 00:04:18.568387 | orchestrator | 2025-05-14 00:04:18 | INFO  | Checking status of 'Cirros 0.6.2' 2025-05-14 00:04:18.571996 | orchestrator | 2025-05-14 00:04:18 | INFO 
 | Checking visibility of 'Cirros 0.6.2' 2025-05-14 00:04:18.572320 | orchestrator | 2025-05-14 00:04:18 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-05-14 00:04:18.772880 | orchestrator | 2025-05-14 00:04:18 | INFO  | Processing image 'Cirros 0.6.3' 2025-05-14 00:04:18.958530 | orchestrator | 2025-05-14 00:04:18 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-05-14 00:04:18.959454 | orchestrator | 2025-05-14 00:04:18 | INFO  | Importing image Cirros 0.6.3 2025-05-14 00:04:18.960792 | orchestrator | 2025-05-14 00:04:18 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-05-14 00:04:20.482459 | orchestrator | 2025-05-14 00:04:20 | INFO  | Waiting for image to leave queued state... 2025-05-14 00:04:22.517788 | orchestrator | 2025-05-14 00:04:22 | INFO  | Waiting for import to complete... 2025-05-14 00:04:32.660601 | orchestrator | 2025-05-14 00:04:32 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-05-14 00:04:33.103123 | orchestrator | 2025-05-14 00:04:33 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-05-14 00:04:33.103224 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting internal_version = 0.6.3 2025-05-14 00:04:33.103237 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting image_original_user = cirros 2025-05-14 00:04:33.103265 | orchestrator | 2025-05-14 00:04:33 | INFO  | Adding tag os:cirros 2025-05-14 00:04:33.308454 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting property architecture: x86_64 2025-05-14 00:04:33.518842 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting property hw_disk_bus: scsi 2025-05-14 00:04:33.804802 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting property hw_rng_model: virtio 2025-05-14 00:04:33.981587 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-14 00:04:34.194790 | orchestrator | 2025-05-14 00:04:34 | INFO  | Setting property hw_watchdog_action: reset 2025-05-14 00:04:34.391428 | orchestrator | 2025-05-14 00:04:34 | INFO  | Setting property hypervisor_type: qemu 2025-05-14 00:04:34.581711 | orchestrator | 2025-05-14 00:04:34 | INFO  | Setting property os_distro: cirros 2025-05-14 00:04:34.770949 | orchestrator | 2025-05-14 00:04:34 | INFO  | Setting property replace_frequency: never 2025-05-14 00:04:34.941181 | orchestrator | 2025-05-14 00:04:34 | INFO  | Setting property uuid_validity: none 2025-05-14 00:04:35.132674 | orchestrator | 2025-05-14 00:04:35 | INFO  | Setting property provided_until: none 2025-05-14 00:04:35.378494 | orchestrator | 2025-05-14 00:04:35 | INFO  | Setting property image_description: Cirros 2025-05-14 00:04:35.604917 | orchestrator | 2025-05-14 00:04:35 | INFO  | Setting property image_name: Cirros 2025-05-14 00:04:35.828838 | orchestrator | 2025-05-14 00:04:35 | INFO  | Setting property internal_version: 0.6.3 2025-05-14 00:04:36.268296 | orchestrator | 2025-05-14 00:04:36 | INFO  | Setting property image_original_user: cirros 2025-05-14 00:04:36.492303 | orchestrator | 2025-05-14 00:04:36 | INFO  | Setting property os_version: 0.6.3 2025-05-14 00:04:36.725447 | orchestrator | 2025-05-14 00:04:36 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-05-14 00:04:36.951133 | orchestrator | 2025-05-14 00:04:36 | INFO  | Setting property image_build_date: 2024-09-26 2025-05-14 
2025-05-14 00:04:18.772880 | orchestrator | 2025-05-14 00:04:18 | INFO  | Processing image 'Cirros 0.6.3'
2025-05-14 00:04:18.958530 | orchestrator | 2025-05-14 00:04:18 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-05-14 00:04:18.959454 | orchestrator | 2025-05-14 00:04:18 | INFO  | Importing image Cirros 0.6.3
2025-05-14 00:04:18.960792 | orchestrator | 2025-05-14 00:04:18 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-05-14 00:04:20.482459 | orchestrator | 2025-05-14 00:04:20 | INFO  | Waiting for image to leave queued state...
2025-05-14 00:04:22.517788 | orchestrator | 2025-05-14 00:04:22 | INFO  | Waiting for import to complete...
2025-05-14 00:04:32.660601 | orchestrator | 2025-05-14 00:04:32 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-05-14 00:04:33.103123 | orchestrator | 2025-05-14 00:04:33 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-05-14 00:04:33.103224 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting internal_version = 0.6.3
2025-05-14 00:04:33.103237 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting image_original_user = cirros
2025-05-14 00:04:33.103265 | orchestrator | 2025-05-14 00:04:33 | INFO  | Adding tag os:cirros
2025-05-14 00:04:33.308454 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting property architecture: x86_64
2025-05-14 00:04:33.518842 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting property hw_disk_bus: scsi
2025-05-14 00:04:33.804802 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting property hw_rng_model: virtio
2025-05-14 00:04:33.981587 | orchestrator | 2025-05-14 00:04:33 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-05-14 00:04:34.194790 | orchestrator | 2025-05-14 00:04:34 | INFO  | Setting property hw_watchdog_action: reset
2025-05-14 00:04:34.391428 | orchestrator | 2025-05-14 00:04:34 | INFO  | Setting property hypervisor_type: qemu
2025-05-14 00:04:34.581711 | orchestrator | 2025-05-14 00:04:34 | INFO  | Setting property os_distro: cirros
2025-05-14 00:04:34.770949 | orchestrator | 2025-05-14 00:04:34 | INFO  | Setting property replace_frequency: never
2025-05-14 00:04:34.941181 | orchestrator | 2025-05-14 00:04:34 | INFO  | Setting property uuid_validity: none
2025-05-14 00:04:35.132674 | orchestrator | 2025-05-14 00:04:35 | INFO  | Setting property provided_until: none
2025-05-14 00:04:35.378494 | orchestrator | 2025-05-14 00:04:35 | INFO  | Setting property image_description: Cirros
2025-05-14 00:04:35.604917 | orchestrator | 2025-05-14 00:04:35 | INFO  | Setting property image_name: Cirros
2025-05-14 00:04:35.828838 | orchestrator | 2025-05-14 00:04:35 | INFO  | Setting property internal_version: 0.6.3
2025-05-14 00:04:36.268296 | orchestrator | 2025-05-14 00:04:36 | INFO  | Setting property image_original_user: cirros
2025-05-14 00:04:36.492303 | orchestrator | 2025-05-14 00:04:36 | INFO  | Setting property os_version: 0.6.3
2025-05-14 00:04:36.725447 | orchestrator | 2025-05-14 00:04:36 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-05-14 00:04:36.951133 | orchestrator | 2025-05-14 00:04:36 | INFO  | Setting property image_build_date: 2024-09-26
2025-05-14 00:04:37.144893 | orchestrator | 2025-05-14 00:04:37 | INFO  | Checking status of 'Cirros 0.6.3'
2025-05-14 00:04:37.145072 | orchestrator | 2025-05-14 00:04:37 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-05-14 00:04:37.145169 | orchestrator | 2025-05-14 00:04:37 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-05-14 00:04:38.160608 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-05-14 00:04:40.031054 | orchestrator | 2025-05-14 00:04:40 | INFO  | date: 2025-05-07
2025-05-14 00:04:40.031150 | orchestrator | 2025-05-14 00:04:40 | INFO  | image: octavia-amphora-haproxy-2024.2.20250507.qcow2
2025-05-14 00:04:40.031165 | orchestrator | 2025-05-14 00:04:40 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2
2025-05-14 00:04:40.031200 | orchestrator | 2025-05-14 00:04:40 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2.CHECKSUM
2025-05-14 00:04:40.054852 | orchestrator | 2025-05-14 00:04:40 | INFO  | checksum: c20b3eccc9fa67100ece69376214f12441dc8ba740779c4f796663f77ded808e
2025-05-14 00:04:40.130947 | orchestrator | 2025-05-14 00:04:40 | INFO  | It takes a moment until task 7b7f4816-eca8-4862-b6ad-b6d7ec37295d (image-manager) has been started and output is visible here.
2025-05-14 00:04:42.512267 | orchestrator | 2025-05-14 00:04:42 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-05-07'
2025-05-14 00:04:42.526892 | orchestrator | 2025-05-14 00:04:42 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2: 200
2025-05-14 00:04:42.528037 | orchestrator | 2025-05-14 00:04:42 | INFO  | Importing image OpenStack Octavia Amphora 2025-05-07
2025-05-14 00:04:42.528190 | orchestrator | 2025-05-14 00:04:42 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2
2025-05-14 00:04:42.925623 | orchestrator | 2025-05-14 00:04:42 | INFO  | Waiting for image to leave queued state...
2025-05-14 00:04:44.976409 | orchestrator | 2025-05-14 00:04:44 | INFO  | Waiting for import to complete...
2025-05-14 00:04:55.079395 | orchestrator | 2025-05-14 00:04:55 | INFO  | Waiting for import to complete...
2025-05-14 00:05:05.177499 | orchestrator | 2025-05-14 00:05:05 | INFO  | Waiting for import to complete...
2025-05-14 00:05:15.286803 | orchestrator | 2025-05-14 00:05:15 | INFO  | Waiting for import to complete...
2025-05-14 00:05:25.376629 | orchestrator | 2025-05-14 00:05:25 | INFO  | Waiting for import to complete...
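Before handing the URL to the image manager, the amphora bootstrap step above resolves the expected digest from the published .CHECKSUM file. A sketch of that verification using only the Python standard library; the exact layout of the CHECKSUM file is an assumption (any 64-character hex token is treated as the SHA-256 digest), so adjust the parsing if the real format differs.

import hashlib
import urllib.request

BASE = ("https://swift.services.a.regiocloud.tech/swift/v1/"
        "AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/"
        "octavia-amphora-haproxy-2024.2.20250507.qcow2")

# Fetch the .CHECKSUM file and take the first token that looks like
# a SHA-256 digest (format assumed, see note above).
with urllib.request.urlopen(BASE + ".CHECKSUM") as resp:
    expected = next(t for t in resp.read().decode().split()
                    if len(t) == 64 and all(c in "0123456789abcdef" for c in t))

# Stream the image in 1 MiB chunks and compute its SHA-256 digest.
digest = hashlib.sha256()
with urllib.request.urlopen(BASE) as resp:
    for chunk in iter(lambda: resp.read(1 << 20), b""):
        digest.update(chunk)

if digest.hexdigest() != expected:
    raise SystemExit("amphora image checksum mismatch")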
2025-05-14 00:05:35.509037 | orchestrator | 2025-05-14 00:05:35 | INFO  | Import of 'OpenStack Octavia Amphora 2025-05-07' successfully completed, reloading images
2025-05-14 00:05:36.028261 | orchestrator | 2025-05-14 00:05:36 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-05-07'
2025-05-14 00:05:36.028478 | orchestrator | 2025-05-14 00:05:36 | INFO  | Setting internal_version = 2025-05-07
2025-05-14 00:05:36.029369 | orchestrator | 2025-05-14 00:05:36 | INFO  | Setting image_original_user = ubuntu
2025-05-14 00:05:36.031303 | orchestrator | 2025-05-14 00:05:36 | INFO  | Adding tag amphora
2025-05-14 00:05:36.244234 | orchestrator | 2025-05-14 00:05:36 | INFO  | Adding tag os:ubuntu
2025-05-14 00:05:36.411486 | orchestrator | 2025-05-14 00:05:36 | INFO  | Setting property architecture: x86_64
2025-05-14 00:05:36.740715 | orchestrator | 2025-05-14 00:05:36 | INFO  | Setting property hw_disk_bus: scsi
2025-05-14 00:05:36.919836 | orchestrator | 2025-05-14 00:05:36 | INFO  | Setting property hw_rng_model: virtio
2025-05-14 00:05:37.087567 | orchestrator | 2025-05-14 00:05:37 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-05-14 00:05:37.286006 | orchestrator | 2025-05-14 00:05:37 | INFO  | Setting property hw_watchdog_action: reset
2025-05-14 00:05:37.487852 | orchestrator | 2025-05-14 00:05:37 | INFO  | Setting property hypervisor_type: qemu
2025-05-14 00:05:37.676522 | orchestrator | 2025-05-14 00:05:37 | INFO  | Setting property os_distro: ubuntu
2025-05-14 00:05:37.879066 | orchestrator | 2025-05-14 00:05:37 | INFO  | Setting property replace_frequency: quarterly
2025-05-14 00:05:38.063710 | orchestrator | 2025-05-14 00:05:38 | INFO  | Setting property uuid_validity: last-1
2025-05-14 00:05:38.250348 | orchestrator | 2025-05-14 00:05:38 | INFO  | Setting property provided_until: none
2025-05-14 00:05:38.496647 | orchestrator | 2025-05-14 00:05:38 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-05-14 00:05:38.678277 | orchestrator | 2025-05-14 00:05:38 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-05-14 00:05:38.883375 | orchestrator | 2025-05-14 00:05:38 | INFO  | Setting property internal_version: 2025-05-07
2025-05-14 00:05:39.092891 | orchestrator | 2025-05-14 00:05:39 | INFO  | Setting property image_original_user: ubuntu
2025-05-14 00:05:39.349425 | orchestrator | 2025-05-14 00:05:39 | INFO  | Setting property os_version: 2025-05-07
2025-05-14 00:05:39.555116 | orchestrator | 2025-05-14 00:05:39 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2
2025-05-14 00:05:39.774175 | orchestrator | 2025-05-14 00:05:39 | INFO  | Setting property image_build_date: 2025-05-07
2025-05-14 00:05:39.982723 | orchestrator | 2025-05-14 00:05:39 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-05-07'
2025-05-14 00:05:39.984391 | orchestrator | 2025-05-14 00:05:39 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-05-07'
2025-05-14 00:05:40.175900 | orchestrator | 2025-05-14 00:05:40 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-05-14 00:05:40.176099 | orchestrator | 2025-05-14 00:05:40 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-05-14 00:05:40.177013 | orchestrator | 2025-05-14 00:05:40 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-05-14 00:05:40.177662 | orchestrator | 2025-05-14 00:05:40 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-05-14 00:05:40.984558 | orchestrator | ok: Runtime: 0:02:58.890366
2025-05-14 00:05:41.006482 |
2025-05-14 00:05:41.006597 | TASK [Run checks]
2025-05-14 00:05:41.737588 | orchestrator | + set -e
2025-05-14 00:05:41.737865 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-14 00:05:41.737904 | orchestrator | ++ export INTERACTIVE=false
2025-05-14 00:05:41.737968 | orchestrator | ++ INTERACTIVE=false
2025-05-14 00:05:41.737986 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-14 00:05:41.737998 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-14 00:05:41.738012 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-05-14 00:05:41.738598 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-05-14 00:05:41.744655 | orchestrator |
2025-05-14 00:05:41.744756 | orchestrator | # CHECK
2025-05-14 00:05:41.744774 | orchestrator |
2025-05-14 00:05:41.744787 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-14 00:05:41.744806 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-14 00:05:41.744818 | orchestrator | + echo
2025-05-14 00:05:41.744830 | orchestrator | + echo '# CHECK'
2025-05-14 00:05:41.744841 | orchestrator | + echo
2025-05-14 00:05:41.744858 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-05-14 00:05:41.745154 | orchestrator | ++ semver latest 5.0.0
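The trace above shows the version gate in the check script: `semver latest 5.0.0` yields -1, and a follow-up test special-cases the literal value `latest` so the current code path is taken anyway. A hedged reimplementation of that comparison; the real script calls an external semver helper, and the exact branch condition is inferred from the trace, not taken from the script source.

def semver_cmp(a: str, b: str) -> int:
    """Return -1/0/1; 'latest' compares below any pinned release, matching the trace."""
    if "latest" in (a, b):
        return 0 if a == b else (-1 if a == "latest" else 1)
    pa, pb = ([int(x) for x in v.split(".")] for v in (a, b))
    return (pa > pb) - (pa < pb)

# Assumed gate, mirroring "[[ -1 -eq -1 ]]" plus "[[ latest != latest ]]":
# the pre-5.0.0 branch is skipped when MANAGER_VERSION is literally "latest".
manager_version = "latest"
use_legacy = semver_cmp(manager_version, "5.0.0") == -1 and manager_version != "latest"
assert use_legacy is False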
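The remainder of the check loops over testbed-manager and the three nodes, printing `osism container <node> ps` and `osism container <node> images` for each; the listings below are informational and are not asserted on. A sketch of turning the same inspection into a hard health assertion with the Docker SDK; this assumes `pip install docker` and a reachable Docker socket on the node being checked, which is not how the testbed check itself operates.

import docker

client = docker.from_env()

# A container counts as unhealthy if it is not running, or if it defines
# a healthcheck that does not currently report "healthy".
unhealthy = [
    c.name
    for c in client.containers.list(all=True)
    if c.status != "running"
    or c.attrs["State"].get("Health", {}).get("Status", "healthy") != "healthy"
]
if unhealthy:
    raise SystemExit(f"unhealthy containers: {', '.join(unhealthy)}")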
2025-05-14 00:05:41.804678 | orchestrator |
2025-05-14 00:05:41.804794 | orchestrator | ## Containers @ testbed-manager
2025-05-14 00:05:41.804819 | orchestrator |
2025-05-14 00:05:41.804842 | orchestrator | + [[ -1 -eq -1 ]]
2025-05-14 00:05:41.804902 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-14 00:05:41.804951 | orchestrator | + echo
2025-05-14 00:05:41.804975 | orchestrator | + echo '## Containers @ testbed-manager'
2025-05-14 00:05:41.804996 | orchestrator | + echo
2025-05-14 00:05:41.805015 | orchestrator | + osism container testbed-manager ps
2025-05-14 00:05:43.760708 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-05-14 00:05:43.760967 | orchestrator | 7e53357db0a1 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_blackbox_exporter
2025-05-14 00:05:43.761016 | orchestrator | 7b7aba919551 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes prometheus_alertmanager
2025-05-14 00:05:43.761037 | orchestrator | ffb05e7c5fc0 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor
2025-05-14 00:05:43.761079 | orchestrator | 0cd9da5851f9 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2025-05-14 00:05:43.761092 | orchestrator | 7f472d096406 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_server
2025-05-14 00:05:43.761110 | orchestrator | ed5ced50ac1c registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 19 minutes ago Up 18 minutes cephclient
2025-05-14 00:05:43.761122 | orchestrator | f6781772e0aa registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-05-14 00:05:43.761134 | orchestrator |
f8cedec2937c registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-05-14 00:05:43.761145 | orchestrator | c83d28292705 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2025-05-14 00:05:43.761205 | orchestrator | 637c1a8bc3fd registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-05-14 00:05:43.761218 | orchestrator | 41d91e0468af registry.osism.tech/osism/homer:v25.05.1 "/bin/sh /entrypoint…" 33 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2025-05-14 00:05:43.761230 | orchestrator | a01f370a3c6c registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 58 minutes ago Up 57 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-05-14 00:05:43.761241 | orchestrator | c37bec0ca12d registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up About an hour (healthy) manager-inventory_reconciler-1 2025-05-14 00:05:43.761253 | orchestrator | 0e577485242d registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) kolla-ansible 2025-05-14 00:05:43.761264 | orchestrator | 1e9963cacccb registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) osism-ansible 2025-05-14 00:05:43.761297 | orchestrator | b964f4597b26 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) ceph-ansible 2025-05-14 00:05:43.761316 | orchestrator | db87ba6dbd8b registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) osism-kubernetes 2025-05-14 00:05:43.761328 | orchestrator | 4c1565d8aad3 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" About an hour ago Up About an hour (healthy) 8000/tcp manager-ara-server-1 2025-05-14 00:05:43.761339 | orchestrator | 1499cf709d1e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up About an hour (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-05-14 00:05:43.761351 | orchestrator | 2d72cecf85a1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up About an hour (healthy) manager-openstack-1 2025-05-14 00:05:43.761362 | orchestrator | 58761c1f4cd3 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up About an hour (healthy) manager-netbox-1 2025-05-14 00:05:43.761373 | orchestrator | 6c5b6516c833 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up About an hour (healthy) manager-beat-1 2025-05-14 00:05:43.761393 | orchestrator | 6ee601e16e39 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up About an hour (healthy) manager-flower-1 2025-05-14 00:05:43.761445 | orchestrator | cc2e2b63b3d1 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up About an hour (healthy) osismclient 2025-05-14 00:05:43.761462 | orchestrator | 45cd835d6eff registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up About an hour (healthy) manager-listener-1 2025-05-14 00:05:43.761481 | orchestrator | 1c1417912518 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 6379/tcp manager-redis-1 2025-05-14 00:05:43.761500 | orchestrator | 11a4ffaea834 
registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up About an hour (healthy) manager-watchdog-1 2025-05-14 00:05:43.761519 | orchestrator | 2137806f2145 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up About an hour (healthy) manager-conductor-1 2025-05-14 00:05:43.761537 | orchestrator | 51b4419f2f6d registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 3306/tcp manager-mariadb-1 2025-05-14 00:05:43.761555 | orchestrator | 85f01d6d6f91 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" About an hour ago Up About an hour (healthy) netbox-netbox-worker-1 2025-05-14 00:05:43.761581 | orchestrator | 36f8e3378b90 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" About an hour ago Up About an hour (healthy) netbox-netbox-1 2025-05-14 00:05:43.761613 | orchestrator | 1ee80083eb02 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 6379/tcp netbox-redis-1 2025-05-14 00:05:43.761631 | orchestrator | 66d2e1fa69d9 registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 5432/tcp netbox-postgres-1 2025-05-14 00:05:43.761648 | orchestrator | ef46b7d14fd2 registry.osism.tech/dockerhub/library/traefik:v3.4.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-05-14 00:05:43.935154 | orchestrator | 2025-05-14 00:05:43.935284 | orchestrator | ## Images @ testbed-manager 2025-05-14 00:05:43.935316 | orchestrator | 2025-05-14 00:05:43.935336 | orchestrator | + echo 2025-05-14 00:05:43.935357 | orchestrator | + echo '## Images @ testbed-manager' 2025-05-14 00:05:43.935379 | orchestrator | + echo 2025-05-14 00:05:43.935399 | orchestrator | + osism container testbed-manager images 2025-05-14 00:05:45.967901 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-14 00:05:45.968042 | orchestrator | registry.osism.tech/osism/osism latest 41343425af04 4 hours ago 339MB 2025-05-14 00:05:45.968058 | orchestrator | registry.osism.tech/osism/osism-ansible latest 64090569d5ca 4 hours ago 555MB 2025-05-14 00:05:45.968069 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 5c7511ea3d96 8 hours ago 536MB 2025-05-14 00:05:45.968078 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 2da5f45db2a6 9 hours ago 311MB 2025-05-14 00:05:45.968113 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 9a70cdf28c76 10 hours ago 1.2GB 2025-05-14 00:05:45.968122 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 76112e377453 14 hours ago 572MB 2025-05-14 00:05:45.968131 | orchestrator | registry.osism.tech/osism/homer v25.05.1 6846e50da1be 21 hours ago 11MB 2025-05-14 00:05:45.968140 | orchestrator | registry.osism.tech/osism/cephclient reef c21acc38590e 21 hours ago 453MB 2025-05-14 00:05:45.968149 | orchestrator | registry.osism.tech/dockerhub/library/postgres 16.9-alpine b56133b65cd3 5 days ago 275MB 2025-05-14 00:05:45.968157 | orchestrator | registry.osism.tech/kolla/cron 2024.2 1889be0eac08 6 days ago 318MB 2025-05-14 00:05:45.968166 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 58e55a1b66e3 6 days ago 746MB 2025-05-14 00:05:45.968174 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 5dd5c89951f8 6 days ago 626MB 2025-05-14 00:05:45.968183 | 
orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 3b8b9ff5984d 6 days ago 360MB 2025-05-14 00:05:45.968192 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 bf00029ac6b4 6 days ago 456MB 2025-05-14 00:05:45.968201 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1b41fe8ac6d5 6 days ago 410MB 2025-05-14 00:05:45.968209 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 8dc226730d91 6 days ago 358MB 2025-05-14 00:05:45.968218 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 cb92564b44ae 6 days ago 891MB 2025-05-14 00:05:45.968227 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.0 79e66182ffbe 8 days ago 224MB 2025-05-14 00:05:45.968235 | orchestrator | registry.osism.tech/dockerhub/hashicorp/vault 1.19.3 272792d172e0 2 weeks ago 504MB 2025-05-14 00:05:45.968244 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.3-alpine 9a07b03a1871 2 weeks ago 41.4MB 2025-05-14 00:05:45.968253 | orchestrator | registry.osism.tech/osism/netbox v4.2.2 de0f89b61971 6 weeks ago 817MB 2025-05-14 00:05:45.968262 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 2 months ago 328MB 2025-05-14 00:05:45.968271 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 3 months ago 571MB 2025-05-14 00:05:45.968280 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 8 months ago 300MB 2025-05-14 00:05:45.968289 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-05-14 00:05:46.224859 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-14 00:05:46.225218 | orchestrator | ++ semver latest 5.0.0 2025-05-14 00:05:46.275205 | orchestrator | 2025-05-14 00:05:46.275316 | orchestrator | ## Containers @ testbed-node-0 2025-05-14 00:05:46.275333 | orchestrator | 2025-05-14 00:05:46.275345 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-14 00:05:46.275357 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-14 00:05:46.275369 | orchestrator | + echo 2025-05-14 00:05:46.275410 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-05-14 00:05:46.275427 | orchestrator | + echo 2025-05-14 00:05:46.275439 | orchestrator | + osism container testbed-node-0 ps 2025-05-14 00:05:48.404773 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-14 00:05:48.404893 | orchestrator | af0a17ae3d35 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-14 00:05:48.404970 | orchestrator | 2a4ff178ebf5 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-14 00:05:48.405012 | orchestrator | 3c8bbce1b036 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-14 00:05:48.405039 | orchestrator | d56a0d13dd43 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-05-14 00:05:48.405051 | orchestrator | b91786b6a960 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-05-14 00:05:48.405073 | orchestrator | 391b4e9f10e1 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) 
magnum_conductor 2025-05-14 00:05:48.405084 | orchestrator | d0f7ed1ee828 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-05-14 00:05:48.405099 | orchestrator | b2b10a61a291 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-05-14 00:05:48.405110 | orchestrator | 833d7c41e191 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-14 00:05:48.405121 | orchestrator | 65efe12e7952 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-05-14 00:05:48.405132 | orchestrator | 422d5cff75a1 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-05-14 00:05:48.405143 | orchestrator | 7ada68517f02 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-05-14 00:05:48.405154 | orchestrator | 5789e9732c10 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-05-14 00:05:48.405165 | orchestrator | 850e2a25fc0d registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-05-14 00:05:48.405176 | orchestrator | 9ee0fa3b3acd registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-05-14 00:05:48.405187 | orchestrator | 1c0ed6167796 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-05-14 00:05:48.405198 | orchestrator | e9a4952cfaa6 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-14 00:05:48.405209 | orchestrator | 47aa1d3b01e8 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server 2025-05-14 00:05:48.405237 | orchestrator | 5302ce6d9fef registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-05-14 00:05:48.405248 | orchestrator | 99f61b8697cb registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-05-14 00:05:48.405265 | orchestrator | d91c52952bcd registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-05-14 00:05:48.405325 | orchestrator | 2cd97ec47303 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-05-14 00:05:48.405338 | orchestrator | bd9689edd40e registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-14 00:05:48.405350 | orchestrator | 2d9c3e77af6a registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-05-14 00:05:48.405361 | orchestrator | e4e05ed0c0ae registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-05-14 00:05:48.405372 | orchestrator | fdd244db528d 
registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-05-14 00:05:48.405383 | orchestrator | 0b9ee673b76f registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-05-14 00:05:48.405395 | orchestrator | edc4a9b08492 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-05-14 00:05:48.405406 | orchestrator | 056471f135e4 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-05-14 00:05:48.405417 | orchestrator | 409e6b053eca registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-05-14 00:05:48.405428 | orchestrator | e4d140f49eaf registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-05-14 00:05:48.405439 | orchestrator | 27e739e0b111 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2025-05-14 00:05:48.405450 | orchestrator | e2900c674245 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-05-14 00:05:48.405461 | orchestrator | 66d1e9884d92 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-14 00:05:48.405472 | orchestrator | d39d9d4c9d5e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-05-14 00:05:48.405483 | orchestrator | 4ec95323f34e registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-05-14 00:05:48.405494 | orchestrator | fc90fbf5d0ae registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-05-14 00:05:48.405505 | orchestrator | 0aca8cfd189a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-05-14 00:05:48.405516 | orchestrator | 8c519b3546e8 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-05-14 00:05:48.405550 | orchestrator | 81dfcbdac5ad registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2025-05-14 00:05:48.405563 | orchestrator | 6da64ee8f309 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-05-14 00:05:48.405582 | orchestrator | 71dbbc1be513 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-05-14 00:05:48.405593 | orchestrator | 4ede32ef8404 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-05-14 00:05:48.405604 | orchestrator | fb64e9d9753c registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-05-14 00:05:48.405639 | orchestrator | 01a4e31fbb04 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-05-14 
00:05:48.405652 | orchestrator | 2501e9a142d5 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-05-14 00:05:48.405663 | orchestrator | 71aacb910b87 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-05-14 00:05:48.405674 | orchestrator | 0231d90da1c6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2025-05-14 00:05:48.405684 | orchestrator | 60c68b6a2e9a registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-05-14 00:05:48.405695 | orchestrator | 5c03ef9c8f92 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-05-14 00:05:48.405706 | orchestrator | 1946657567cf registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-05-14 00:05:48.405717 | orchestrator | 91d751efdc64 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-05-14 00:05:48.405728 | orchestrator | 74438d16b1f6 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-05-14 00:05:48.405739 | orchestrator | ab9ac2127e8d registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-05-14 00:05:48.405750 | orchestrator | 0b4a88b1a04e registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-05-14 00:05:48.405761 | orchestrator | 6186b202ef4a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-05-14 00:05:48.405772 | orchestrator | 1277686f18b4 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-05-14 00:05:48.659212 | orchestrator | 2025-05-14 00:05:48.659332 | orchestrator | ## Images @ testbed-node-0 2025-05-14 00:05:48.659349 | orchestrator | 2025-05-14 00:05:48.659362 | orchestrator | + echo 2025-05-14 00:05:48.659374 | orchestrator | + echo '## Images @ testbed-node-0' 2025-05-14 00:05:48.659389 | orchestrator | + echo 2025-05-14 00:05:48.659400 | orchestrator | + osism container testbed-node-0 images 2025-05-14 00:05:50.776310 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-14 00:05:50.776409 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a6eecfeabe79 21 hours ago 1.27GB 2025-05-14 00:05:50.776417 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 04fc7376c64c 6 days ago 375MB 2025-05-14 00:05:50.776457 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 67fa0a55bc5e 6 days ago 1.59GB 2025-05-14 00:05:50.776464 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f2651c58df80 6 days ago 1.55GB 2025-05-14 00:05:50.776470 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4cdd10b90f5a 6 days ago 1.01GB 2025-05-14 00:05:50.776476 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 50d58f1f6e4e 6 days ago 326MB 2025-05-14 00:05:50.776481 | orchestrator | registry.osism.tech/kolla/cron 2024.2 1889be0eac08 6 days ago 318MB 2025-05-14 00:05:50.776487 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ae7fe18eaf3e 6 days ago 329MB 2025-05-14 00:05:50.776492 | orchestrator | 
registry.osism.tech/kolla/proxysql 2024.2 2541622ae785 6 days ago 417MB 2025-05-14 00:05:50.776498 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 58e55a1b66e3 6 days ago 746MB 2025-05-14 00:05:50.776860 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c143bd7f4121 6 days ago 318MB 2025-05-14 00:05:50.776868 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 5dd5c89951f8 6 days ago 626MB 2025-05-14 00:05:50.776873 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 340739858985 6 days ago 590MB 2025-05-14 00:05:50.776877 | orchestrator | registry.osism.tech/kolla/redis 2024.2 00384dafd051 6 days ago 324MB 2025-05-14 00:05:50.776881 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 45c0ed11fefe 6 days ago 324MB 2025-05-14 00:05:50.776885 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 3e2b688ee000 6 days ago 361MB 2025-05-14 00:05:50.776889 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 62d56b6fac4e 6 days ago 361MB 2025-05-14 00:05:50.776893 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 d7167bf51937 6 days ago 344MB 2025-05-14 00:05:50.776897 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4972b33b6697 6 days ago 351MB 2025-05-14 00:05:50.776901 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1b41fe8ac6d5 6 days ago 410MB 2025-05-14 00:05:50.776908 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 8dc226730d91 6 days ago 358MB 2025-05-14 00:05:50.776915 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 74d6e103330c 6 days ago 353MB 2025-05-14 00:05:50.776943 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 67d8a8d94f28 6 days ago 1.04GB 2025-05-14 00:05:50.776950 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 954d23827c32 6 days ago 1.04GB 2025-05-14 00:05:50.776957 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 12ca4ba36866 6 days ago 1.04GB 2025-05-14 00:05:50.776964 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 d5dd5b6fe0a1 6 days ago 1.04GB 2025-05-14 00:05:50.776971 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 51ef1cabd60d 6 days ago 1.04GB 2025-05-14 00:05:50.776977 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 50b1dc1a5592 6 days ago 1.04GB 2025-05-14 00:05:50.776984 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 268d65c18d83 6 days ago 1.13GB 2025-05-14 00:05:50.776991 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a550ee2c1fb2 6 days ago 1.11GB 2025-05-14 00:05:50.777004 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 271202743813 6 days ago 1.11GB 2025-05-14 00:05:50.777011 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 1cbf127747d4 6 days ago 1.15GB 2025-05-14 00:05:50.777024 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 ad86766891c6 6 days ago 1.06GB 2025-05-14 00:05:50.777031 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 da249321181d 6 days ago 1.06GB 2025-05-14 00:05:50.777037 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 f8c92b9f65e4 6 days ago 1.06GB 2025-05-14 00:05:50.777044 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 227c0b84f8a2 6 days ago 1.41GB 2025-05-14 00:05:50.777050 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 8635e59a338d 6 days ago 
1.41GB 2025-05-14 00:05:50.777057 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6e8318f9146d 6 days ago 1.04GB 2025-05-14 00:05:50.777064 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d112d35cb4cc 6 days ago 1.05GB 2025-05-14 00:05:50.777070 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 12eb62b255c1 6 days ago 1.05GB 2025-05-14 00:05:50.777078 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d5ed39be7469 6 days ago 1.06GB 2025-05-14 00:05:50.777084 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a636aa737c69 6 days ago 1.05GB 2025-05-14 00:05:50.777091 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1b750e4a57a6 6 days ago 1.05GB 2025-05-14 00:05:50.777098 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 e643924bd3df 6 days ago 1.06GB 2025-05-14 00:05:50.777104 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 627530339ea2 6 days ago 1.42GB 2025-05-14 00:05:50.777111 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 1693a9681618 6 days ago 1.29GB 2025-05-14 00:05:50.777117 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 13f6d887f84c 6 days ago 1.29GB 2025-05-14 00:05:50.777121 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 229d0afc6727 6 days ago 1.29GB 2025-05-14 00:05:50.777132 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 41f5975572eb 6 days ago 1.11GB 2025-05-14 00:05:50.777137 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ac5f63def63f 6 days ago 1.11GB 2025-05-14 00:05:50.777143 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 85d71337ad49 6 days ago 1.1GB 2025-05-14 00:05:50.777150 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 90c7cfd6b9f1 6 days ago 1.12GB 2025-05-14 00:05:50.777157 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 a050c19ba280 6 days ago 1.1GB 2025-05-14 00:05:50.777164 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 eea4b2b0f79c 6 days ago 1.1GB 2025-05-14 00:05:50.777170 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 67f9c52616ca 6 days ago 1.12GB 2025-05-14 00:05:50.777178 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 93b5d082cb86 6 days ago 1.31GB 2025-05-14 00:05:50.777185 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 93300b4fa890 6 days ago 1.19GB 2025-05-14 00:05:50.777192 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9125e5efb56e 6 days ago 947MB 2025-05-14 00:05:50.777199 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 9cb6a4feaa4c 6 days ago 946MB 2025-05-14 00:05:50.777206 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ca70d4f12a66 6 days ago 947MB 2025-05-14 00:05:50.777212 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 ca1be25de8b6 6 days ago 946MB 2025-05-14 00:05:50.777222 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 60f89630a675 7 days ago 1.21GB 2025-05-14 00:05:50.777235 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 91a2a6c5d8a0 7 days ago 1.24GB 2025-05-14 00:05:51.023578 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-14 00:05:51.023948 | orchestrator | ++ semver latest 5.0.0 2025-05-14 00:05:51.071548 | orchestrator | 2025-05-14 00:05:51.071655 | orchestrator | ## Containers @ testbed-node-1 2025-05-14 00:05:51.071670 | 
orchestrator | 2025-05-14 00:05:51.071681 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-14 00:05:51.071692 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-14 00:05:51.071702 | orchestrator | + echo 2025-05-14 00:05:51.071712 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-05-14 00:05:51.071725 | orchestrator | + echo 2025-05-14 00:05:51.071735 | orchestrator | + osism container testbed-node-1 ps 2025-05-14 00:05:53.176582 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-14 00:05:53.176685 | orchestrator | e1a2d60f11ca registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-14 00:05:53.176701 | orchestrator | 0905f62c9783 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-14 00:05:53.176713 | orchestrator | 9f4fb5afcd3c registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-14 00:05:53.176724 | orchestrator | 01aae04b9aaf registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-05-14 00:05:53.176735 | orchestrator | 8ed2fe9b5959 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-05-14 00:05:53.176746 | orchestrator | 650d3e8ca2f1 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-05-14 00:05:53.176757 | orchestrator | 5cf0f44ef7c7 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-05-14 00:05:53.176773 | orchestrator | c6775d18b1e4 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-05-14 00:05:53.176784 | orchestrator | d856fb6041da registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-14 00:05:53.176796 | orchestrator | 9c9397a3bc7a registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-05-14 00:05:53.176807 | orchestrator | 81636d899386 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-05-14 00:05:53.176818 | orchestrator | 400cb3bc480f registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-05-14 00:05:53.176829 | orchestrator | acd63c31a31f registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-05-14 00:05:53.176841 | orchestrator | 7d08794c3073 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-05-14 00:05:53.176851 | orchestrator | 24559d8973f7 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-05-14 00:05:53.176884 | orchestrator | e532859179c1 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-05-14 00:05:53.176896 | orchestrator | 3753eff37e8e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init 
--single-…" 11 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-14 00:05:53.177017 | orchestrator | 5d75d21c2f60 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-05-14 00:05:53.177041 | orchestrator | 35d1f6e9e371 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-05-14 00:05:53.177061 | orchestrator | 68445ecf1ade registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-05-14 00:05:53.177074 | orchestrator | bc07b4c96e55 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-05-14 00:05:53.177104 | orchestrator | ceb5aa829ca8 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-05-14 00:05:53.177117 | orchestrator | b7312d3fa25c registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-14 00:05:53.177130 | orchestrator | 2d2c08bf965b registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-05-14 00:05:53.177142 | orchestrator | c8350442f691 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-05-14 00:05:53.177154 | orchestrator | 9dc7c5eafd4b registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-14 00:05:53.177166 | orchestrator | c2c87acb2a38 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-05-14 00:05:53.177179 | orchestrator | 60fab7a77caf registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-05-14 00:05:53.177192 | orchestrator | 8a836b223d06 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-05-14 00:05:53.177204 | orchestrator | ea530f188b4b registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-05-14 00:05:53.177217 | orchestrator | fcda7fad5e28 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-05-14 00:05:53.177229 | orchestrator | 43b00e0f53a8 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2025-05-14 00:05:53.177242 | orchestrator | 4e2a8d53e392 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-05-14 00:05:53.177255 | orchestrator | 4ab9e23ff408 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-14 00:05:53.177267 | orchestrator | c0cb1d374124 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-05-14 00:05:53.177288 | orchestrator | c618a1384ec1 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes 
(healthy) horizon 2025-05-14 00:05:53.177300 | orchestrator | 3aa6d3950514 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-05-14 00:05:53.177313 | orchestrator | d3682a0c2756 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-05-14 00:05:53.177325 | orchestrator | 861a435625f0 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-05-14 00:05:53.177338 | orchestrator | 38c593f1d29b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2025-05-14 00:05:53.177351 | orchestrator | b6a88e53fc9f registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-05-14 00:05:53.177370 | orchestrator | efe37a7abb82 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-05-14 00:05:53.177384 | orchestrator | fb77b7280828 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-05-14 00:05:53.177396 | orchestrator | 884e00edde27 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-05-14 00:05:53.177416 | orchestrator | 321ce8eb192b registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-05-14 00:05:53.177429 | orchestrator | 89676a83c193 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-05-14 00:05:53.177439 | orchestrator | 97e5d6b442e8 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-05-14 00:05:53.177450 | orchestrator | 2be57d157610 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-05-14 00:05:53.177461 | orchestrator | 87c19fce26f1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2025-05-14 00:05:53.177472 | orchestrator | 2562bfcdf234 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-05-14 00:05:53.177483 | orchestrator | 55cf1494295b registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-05-14 00:05:53.177493 | orchestrator | d15c00eca8bc registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-05-14 00:05:53.177504 | orchestrator | 7db9609683f0 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-05-14 00:05:53.177515 | orchestrator | 10e0cbbec156 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-05-14 00:05:53.177532 | orchestrator | 0d4be77d8f8a registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-05-14 00:05:53.177543 | orchestrator | bf513363038d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-05-14 00:05:53.177554 | orchestrator | f9344f4c09fe 
registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-05-14 00:05:53.437055 | orchestrator | 2025-05-14 00:05:53.437156 | orchestrator | ## Images @ testbed-node-1 2025-05-14 00:05:53.437172 | orchestrator | 2025-05-14 00:05:53.437184 | orchestrator | + echo 2025-05-14 00:05:53.437196 | orchestrator | + echo '## Images @ testbed-node-1' 2025-05-14 00:05:53.437210 | orchestrator | + echo 2025-05-14 00:05:53.437221 | orchestrator | + osism container testbed-node-1 images 2025-05-14 00:05:55.547794 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-14 00:05:55.547988 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a6eecfeabe79 21 hours ago 1.27GB 2025-05-14 00:05:55.548014 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 04fc7376c64c 6 days ago 375MB 2025-05-14 00:05:55.548032 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 67fa0a55bc5e 6 days ago 1.59GB 2025-05-14 00:05:55.548048 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f2651c58df80 6 days ago 1.55GB 2025-05-14 00:05:55.548065 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4cdd10b90f5a 6 days ago 1.01GB 2025-05-14 00:05:55.548081 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 50d58f1f6e4e 6 days ago 326MB 2025-05-14 00:05:55.548098 | orchestrator | registry.osism.tech/kolla/cron 2024.2 1889be0eac08 6 days ago 318MB 2025-05-14 00:05:55.548115 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ae7fe18eaf3e 6 days ago 329MB 2025-05-14 00:05:55.548132 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 2541622ae785 6 days ago 417MB 2025-05-14 00:05:55.548148 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 58e55a1b66e3 6 days ago 746MB 2025-05-14 00:05:55.548165 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c143bd7f4121 6 days ago 318MB 2025-05-14 00:05:55.548181 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 5dd5c89951f8 6 days ago 626MB 2025-05-14 00:05:55.548197 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 340739858985 6 days ago 590MB 2025-05-14 00:05:55.548214 | orchestrator | registry.osism.tech/kolla/redis 2024.2 00384dafd051 6 days ago 324MB 2025-05-14 00:05:55.548231 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 45c0ed11fefe 6 days ago 324MB 2025-05-14 00:05:55.548247 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 3e2b688ee000 6 days ago 361MB 2025-05-14 00:05:55.548262 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 62d56b6fac4e 6 days ago 361MB 2025-05-14 00:05:55.548301 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4972b33b6697 6 days ago 351MB 2025-05-14 00:05:55.548319 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 d7167bf51937 6 days ago 344MB 2025-05-14 00:05:55.548336 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1b41fe8ac6d5 6 days ago 410MB 2025-05-14 00:05:55.548353 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 8dc226730d91 6 days ago 358MB 2025-05-14 00:05:55.548369 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 74d6e103330c 6 days ago 353MB 2025-05-14 00:05:55.548416 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 268d65c18d83 6 days ago 1.13GB 2025-05-14 00:05:55.548434 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a550ee2c1fb2 6 days ago 1.11GB 
2025-05-14 00:05:55.548451 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 271202743813 6 days ago 1.11GB 2025-05-14 00:05:55.548467 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 1cbf127747d4 6 days ago 1.15GB 2025-05-14 00:05:55.548484 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 ad86766891c6 6 days ago 1.06GB 2025-05-14 00:05:55.548500 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 da249321181d 6 days ago 1.06GB 2025-05-14 00:05:55.548517 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 f8c92b9f65e4 6 days ago 1.06GB 2025-05-14 00:05:55.548534 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 227c0b84f8a2 6 days ago 1.41GB 2025-05-14 00:05:55.548550 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 8635e59a338d 6 days ago 1.41GB 2025-05-14 00:05:55.548568 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6e8318f9146d 6 days ago 1.04GB 2025-05-14 00:05:55.548585 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d112d35cb4cc 6 days ago 1.05GB 2025-05-14 00:05:55.548602 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 12eb62b255c1 6 days ago 1.05GB 2025-05-14 00:05:55.548619 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d5ed39be7469 6 days ago 1.06GB 2025-05-14 00:05:55.548636 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a636aa737c69 6 days ago 1.05GB 2025-05-14 00:05:55.548674 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1b750e4a57a6 6 days ago 1.05GB 2025-05-14 00:05:55.548691 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 e643924bd3df 6 days ago 1.06GB 2025-05-14 00:05:55.548708 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 627530339ea2 6 days ago 1.42GB 2025-05-14 00:05:55.548724 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 1693a9681618 6 days ago 1.29GB 2025-05-14 00:05:55.548740 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 13f6d887f84c 6 days ago 1.29GB 2025-05-14 00:05:55.548757 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 229d0afc6727 6 days ago 1.29GB 2025-05-14 00:05:55.548774 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 85d71337ad49 6 days ago 1.1GB 2025-05-14 00:05:55.548790 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 90c7cfd6b9f1 6 days ago 1.12GB 2025-05-14 00:05:55.548807 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 a050c19ba280 6 days ago 1.1GB 2025-05-14 00:05:55.548822 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 eea4b2b0f79c 6 days ago 1.1GB 2025-05-14 00:05:55.548839 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 67f9c52616ca 6 days ago 1.12GB 2025-05-14 00:05:55.548855 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 93b5d082cb86 6 days ago 1.31GB 2025-05-14 00:05:55.548872 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 93300b4fa890 6 days ago 1.19GB 2025-05-14 00:05:55.548889 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9125e5efb56e 6 days ago 947MB 2025-05-14 00:05:55.548905 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 9cb6a4feaa4c 6 days ago 946MB 2025-05-14 00:05:55.548943 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ca70d4f12a66 6 days ago 947MB 2025-05-14 00:05:55.548970 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 
2024.2 ca1be25de8b6 6 days ago 946MB 2025-05-14 00:05:55.548987 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 60f89630a675 7 days ago 1.21GB 2025-05-14 00:05:55.549004 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 91a2a6c5d8a0 7 days ago 1.24GB 2025-05-14 00:05:55.798542 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-14 00:05:55.798781 | orchestrator | ++ semver latest 5.0.0 2025-05-14 00:05:55.837189 | orchestrator | 2025-05-14 00:05:55.837288 | orchestrator | ## Containers @ testbed-node-2 2025-05-14 00:05:55.837303 | orchestrator | 2025-05-14 00:05:55.837315 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-14 00:05:55.837326 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-14 00:05:55.837337 | orchestrator | + echo 2025-05-14 00:05:55.837349 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-05-14 00:05:55.837362 | orchestrator | + echo 2025-05-14 00:05:55.837373 | orchestrator | + osism container testbed-node-2 ps 2025-05-14 00:05:57.957124 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-14 00:05:57.957224 | orchestrator | 8864fad13598 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-14 00:05:57.957238 | orchestrator | 9e261add8946 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-14 00:05:57.957249 | orchestrator | 6ae4e9cf777e registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2025-05-14 00:05:57.957259 | orchestrator | 813d345105de registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-05-14 00:05:57.957269 | orchestrator | ccbca5ef57c1 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-05-14 00:05:57.957298 | orchestrator | c4aa5785e56f registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-05-14 00:05:57.957308 | orchestrator | e673839fea41 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-05-14 00:05:57.957318 | orchestrator | 708a594e6c39 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-05-14 00:05:57.957328 | orchestrator | 34bdecf64fa9 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-14 00:05:57.957337 | orchestrator | 416a88e125ee registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-05-14 00:05:57.957346 | orchestrator | 6dc834b9896a registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-05-14 00:05:57.957356 | orchestrator | c4820fc14b50 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-05-14 00:05:57.957366 | orchestrator | 345c96d9883a registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-05-14 00:05:57.957375 | orchestrator | 6bdbb8080c62 
registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-05-14 00:05:57.957413 | orchestrator | b8d82c8c00ef registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-05-14 00:05:57.957423 | orchestrator | a6f2eeb50725 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-05-14 00:05:57.957434 | orchestrator | 93d80bb407c8 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2025-05-14 00:05:57.957443 | orchestrator | 62f10c5ccd28 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-05-14 00:05:57.957453 | orchestrator | df6f6bdc8b34 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-05-14 00:05:57.957462 | orchestrator | 7e9f9e5adaaf registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-05-14 00:05:57.957472 | orchestrator | e52f093018cd registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-05-14 00:05:57.957497 | orchestrator | 3a4d2aaf12e0 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-05-14 00:05:57.957508 | orchestrator | bb98f2f47f32 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-14 00:05:57.957518 | orchestrator | ac750812c97c registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-05-14 00:05:57.957527 | orchestrator | ff4c7ce55a64 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-05-14 00:05:57.957537 | orchestrator | 47d5ca69b11b registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-14 00:05:57.957546 | orchestrator | 941e404bebd4 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2025-05-14 00:05:57.957557 | orchestrator | a059160c6f1c registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-05-14 00:05:57.957566 | orchestrator | feffeb76ccfb registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-05-14 00:05:57.957576 | orchestrator | edf0957a1b7f registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-05-14 00:05:57.957586 | orchestrator | 7debe2a49b34 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-05-14 00:05:57.957595 | orchestrator | 1e00bf88569f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2025-05-14 00:05:57.957604 | orchestrator | e7e8e0377597 
registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-05-14 00:05:57.957620 | orchestrator | 88efbc79a8f5 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-05-14 00:05:57.957630 | orchestrator | 3daaec2b2a7f registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-05-14 00:05:57.957639 | orchestrator | 341aabc8667d registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-05-14 00:05:57.957650 | orchestrator | 2657bbfa6efd registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-05-14 00:05:57.957663 | orchestrator | bc23b25821d0 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-05-14 00:05:57.957673 | orchestrator | 850f9b9336d8 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-05-14 00:05:57.957690 | orchestrator | 7a0cccbed198 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-05-14 00:05:57.957701 | orchestrator | 43cad0a3f11e registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-05-14 00:05:57.957713 | orchestrator | e89c3c691f3b registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-05-14 00:05:57.957725 | orchestrator | 3584acea654a registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-05-14 00:05:57.957736 | orchestrator | 25656ef3230d registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-05-14 00:05:57.957755 | orchestrator | b7b01aecabf9 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-05-14 00:05:57.957766 | orchestrator | cb5b609baec8 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-05-14 00:05:57.957777 | orchestrator | 278e8fed3eb0 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-05-14 00:05:57.957788 | orchestrator | 6fba3352c86b registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-05-14 00:05:57.957799 | orchestrator | 05e8ffeb3d71 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2 2025-05-14 00:05:57.957811 | orchestrator | fe67b31d9d09 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-05-14 00:05:57.957822 | orchestrator | 96cb2ec280f3 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-05-14 00:05:57.957833 | orchestrator | 069a529f7627 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-05-14 00:05:57.957844 | orchestrator | 6a14c8e63751 registry.osism.tech/kolla/redis:2024.2 
"dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-05-14 00:05:57.957861 | orchestrator | 145088b5214d registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-05-14 00:05:57.957872 | orchestrator | 216016272b8c registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-05-14 00:05:57.957882 | orchestrator | 8472a8f7c309 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-05-14 00:05:57.957894 | orchestrator | fa73f0e01073 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-05-14 00:05:58.220101 | orchestrator | 2025-05-14 00:05:58.220190 | orchestrator | ## Images @ testbed-node-2 2025-05-14 00:05:58.220204 | orchestrator | 2025-05-14 00:05:58.220258 | orchestrator | + echo 2025-05-14 00:05:58.220273 | orchestrator | + echo '## Images @ testbed-node-2' 2025-05-14 00:05:58.220311 | orchestrator | + echo 2025-05-14 00:05:58.220323 | orchestrator | + osism container testbed-node-2 images 2025-05-14 00:06:00.309800 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-14 00:06:00.309902 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a6eecfeabe79 21 hours ago 1.27GB 2025-05-14 00:06:00.309981 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 04fc7376c64c 6 days ago 375MB 2025-05-14 00:06:00.309995 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 67fa0a55bc5e 6 days ago 1.59GB 2025-05-14 00:06:00.310006 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f2651c58df80 6 days ago 1.55GB 2025-05-14 00:06:00.310077 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4cdd10b90f5a 6 days ago 1.01GB 2025-05-14 00:06:00.310099 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 50d58f1f6e4e 6 days ago 326MB 2025-05-14 00:06:00.310118 | orchestrator | registry.osism.tech/kolla/cron 2024.2 1889be0eac08 6 days ago 318MB 2025-05-14 00:06:00.310137 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ae7fe18eaf3e 6 days ago 329MB 2025-05-14 00:06:00.310151 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 2541622ae785 6 days ago 417MB 2025-05-14 00:06:00.310162 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c143bd7f4121 6 days ago 318MB 2025-05-14 00:06:00.310173 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 58e55a1b66e3 6 days ago 746MB 2025-05-14 00:06:00.310184 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 5dd5c89951f8 6 days ago 626MB 2025-05-14 00:06:00.310194 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 340739858985 6 days ago 590MB 2025-05-14 00:06:00.310205 | orchestrator | registry.osism.tech/kolla/redis 2024.2 00384dafd051 6 days ago 324MB 2025-05-14 00:06:00.310215 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 45c0ed11fefe 6 days ago 324MB 2025-05-14 00:06:00.310226 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 3e2b688ee000 6 days ago 361MB 2025-05-14 00:06:00.310236 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 62d56b6fac4e 6 days ago 361MB 2025-05-14 00:06:00.310247 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 d7167bf51937 6 days ago 344MB 2025-05-14 00:06:00.310257 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4972b33b6697 6 days ago 351MB 2025-05-14 
00:06:00.310268 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1b41fe8ac6d5 6 days ago 410MB 2025-05-14 00:06:00.310278 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 8dc226730d91 6 days ago 358MB 2025-05-14 00:06:00.310317 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 74d6e103330c 6 days ago 353MB 2025-05-14 00:06:00.310329 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 268d65c18d83 6 days ago 1.13GB 2025-05-14 00:06:00.310342 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a550ee2c1fb2 6 days ago 1.11GB 2025-05-14 00:06:00.310355 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 271202743813 6 days ago 1.11GB 2025-05-14 00:06:00.310368 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 1cbf127747d4 6 days ago 1.15GB 2025-05-14 00:06:00.310381 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 ad86766891c6 6 days ago 1.06GB 2025-05-14 00:06:00.310394 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 da249321181d 6 days ago 1.06GB 2025-05-14 00:06:00.310407 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 f8c92b9f65e4 6 days ago 1.06GB 2025-05-14 00:06:00.310421 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 227c0b84f8a2 6 days ago 1.41GB 2025-05-14 00:06:00.310433 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 8635e59a338d 6 days ago 1.41GB 2025-05-14 00:06:00.310446 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6e8318f9146d 6 days ago 1.04GB 2025-05-14 00:06:00.310459 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d112d35cb4cc 6 days ago 1.05GB 2025-05-14 00:06:00.310489 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 12eb62b255c1 6 days ago 1.05GB 2025-05-14 00:06:00.310503 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d5ed39be7469 6 days ago 1.06GB 2025-05-14 00:06:00.310517 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a636aa737c69 6 days ago 1.05GB 2025-05-14 00:06:00.310548 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1b750e4a57a6 6 days ago 1.05GB 2025-05-14 00:06:00.310561 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 e643924bd3df 6 days ago 1.06GB 2025-05-14 00:06:00.310574 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 627530339ea2 6 days ago 1.42GB 2025-05-14 00:06:00.310587 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 1693a9681618 6 days ago 1.29GB 2025-05-14 00:06:00.310599 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 13f6d887f84c 6 days ago 1.29GB 2025-05-14 00:06:00.310617 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 229d0afc6727 6 days ago 1.29GB 2025-05-14 00:06:00.310630 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 85d71337ad49 6 days ago 1.1GB 2025-05-14 00:06:00.310643 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 90c7cfd6b9f1 6 days ago 1.12GB 2025-05-14 00:06:00.310655 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 a050c19ba280 6 days ago 1.1GB 2025-05-14 00:06:00.310668 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 eea4b2b0f79c 6 days ago 1.1GB 2025-05-14 00:06:00.310682 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 67f9c52616ca 6 days ago 1.12GB 2025-05-14 00:06:00.310694 | orchestrator | 
registry.osism.tech/kolla/magnum-conductor 2024.2 93b5d082cb86 6 days ago 1.31GB 2025-05-14 00:06:00.310705 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 93300b4fa890 6 days ago 1.19GB 2025-05-14 00:06:00.310715 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9125e5efb56e 6 days ago 947MB 2025-05-14 00:06:00.310726 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 9cb6a4feaa4c 6 days ago 946MB 2025-05-14 00:06:00.310744 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ca70d4f12a66 6 days ago 947MB 2025-05-14 00:06:00.310755 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 ca1be25de8b6 6 days ago 946MB 2025-05-14 00:06:00.310766 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 60f89630a675 7 days ago 1.21GB 2025-05-14 00:06:00.310777 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 91a2a6c5d8a0 7 days ago 1.24GB 2025-05-14 00:06:00.491490 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-05-14 00:06:00.499228 | orchestrator | + set -e 2025-05-14 00:06:00.499282 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 00:06:00.500353 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 00:06:00.500394 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 00:06:00.500406 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 00:06:00.500417 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 00:06:00.500429 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 00:06:00.500443 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 00:06:00.500455 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-14 00:06:00.500471 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-14 00:06:00.500482 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 00:06:00.500494 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 00:06:00.500505 | orchestrator | ++ export ARA=false 2025-05-14 00:06:00.500516 | orchestrator | ++ ARA=false 2025-05-14 00:06:00.500528 | orchestrator | ++ export TEMPEST=false 2025-05-14 00:06:00.500539 | orchestrator | ++ TEMPEST=false 2025-05-14 00:06:00.500549 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 00:06:00.500560 | orchestrator | ++ IS_ZUUL=true 2025-05-14 00:06:00.500571 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-14 00:06:00.500582 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-14 00:06:00.500593 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 00:06:00.500603 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 00:06:00.500614 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 00:06:00.500625 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 00:06:00.500636 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-14 00:06:00.500646 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 00:06:00.500657 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 00:06:00.500668 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 00:06:00.500679 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-14 00:06:00.500690 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-05-14 00:06:00.506291 | orchestrator | + set -e 2025-05-14 00:06:00.506336 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-14 00:06:00.506349 | orchestrator | ++ export INTERACTIVE=false 2025-05-14 00:06:00.506360 | orchestrator | ++ INTERACTIVE=false 2025-05-14 00:06:00.506371 | orchestrator | ++ export 
OSISM_APPLY_RETRY=1 2025-05-14 00:06:00.506381 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-14 00:06:00.506392 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-05-14 00:06:00.506801 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-05-14 00:06:00.509230 | orchestrator | 2025-05-14 00:06:00.509253 | orchestrator | # Ceph status 2025-05-14 00:06:00.509265 | orchestrator | 2025-05-14 00:06:00.509276 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-14 00:06:00.509287 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-14 00:06:00.509298 | orchestrator | + echo 2025-05-14 00:06:00.509309 | orchestrator | + echo '# Ceph status' 2025-05-14 00:06:00.509320 | orchestrator | + echo 2025-05-14 00:06:00.509331 | orchestrator | + ceph -s 2025-05-14 00:06:01.017833 | orchestrator | cluster: 2025-05-14 00:06:01.018090 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-05-14 00:06:01.018124 | orchestrator | health: HEALTH_OK 2025-05-14 00:06:01.018146 | orchestrator | 2025-05-14 00:06:01.018167 | orchestrator | services: 2025-05-14 00:06:01.018187 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-05-14 00:06:01.018208 | orchestrator | mgr: testbed-node-2(active, since 17m), standbys: testbed-node-1, testbed-node-0 2025-05-14 00:06:01.018228 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-05-14 00:06:01.018249 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m) 2025-05-14 00:06:01.018269 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-05-14 00:06:01.018321 | orchestrator | 2025-05-14 00:06:01.018342 | orchestrator | data: 2025-05-14 00:06:01.018362 | orchestrator | volumes: 1/1 healthy 2025-05-14 00:06:01.018383 | orchestrator | pools: 14 pools, 401 pgs 2025-05-14 00:06:01.018405 | orchestrator | objects: 556 objects, 2.2 GiB 2025-05-14 00:06:01.018424 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-05-14 00:06:01.018444 | orchestrator | pgs: 401 active+clean 2025-05-14 00:06:01.018465 | orchestrator | 2025-05-14 00:06:01.045364 | orchestrator | 2025-05-14 00:06:01.045468 | orchestrator | # Ceph versions 2025-05-14 00:06:01.045484 | orchestrator | 2025-05-14 00:06:01.045496 | orchestrator | + echo 2025-05-14 00:06:01.045508 | orchestrator | + echo '# Ceph versions' 2025-05-14 00:06:01.045520 | orchestrator | + echo 2025-05-14 00:06:01.045531 | orchestrator | + ceph versions 2025-05-14 00:06:01.584136 | orchestrator | { 2025-05-14 00:06:01.585136 | orchestrator | "mon": { 2025-05-14 00:06:01.585180 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-14 00:06:01.585202 | orchestrator | }, 2025-05-14 00:06:01.585214 | orchestrator | "mgr": { 2025-05-14 00:06:01.585225 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-14 00:06:01.585236 | orchestrator | }, 2025-05-14 00:06:01.585246 | orchestrator | "osd": { 2025-05-14 00:06:01.585257 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-05-14 00:06:01.585268 | orchestrator | }, 2025-05-14 00:06:01.585279 | orchestrator | "mds": { 2025-05-14 00:06:01.585290 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-14 00:06:01.585300 | orchestrator | }, 2025-05-14 00:06:01.585311 | orchestrator | "rgw": { 2025-05-14 
00:06:01.585322 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-14 00:06:01.585333 | orchestrator | }, 2025-05-14 00:06:01.585344 | orchestrator | "overall": { 2025-05-14 00:06:01.585356 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-05-14 00:06:01.585367 | orchestrator | } 2025-05-14 00:06:01.585378 | orchestrator | } 2025-05-14 00:06:01.621042 | orchestrator | 2025-05-14 00:06:01.621126 | orchestrator | # Ceph OSD tree 2025-05-14 00:06:01.621139 | orchestrator | 2025-05-14 00:06:01.621152 | orchestrator | + echo 2025-05-14 00:06:01.621163 | orchestrator | + echo '# Ceph OSD tree' 2025-05-14 00:06:01.621176 | orchestrator | + echo 2025-05-14 00:06:01.621187 | orchestrator | + ceph osd df tree 2025-05-14 00:06:02.129444 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-05-14 00:06:02.129553 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-05-14 00:06:02.129568 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-05-14 00:06:02.129580 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1008 MiB 939 MiB 1 KiB 70 MiB 19 GiB 4.93 0.83 189 up osd.0 2025-05-14 00:06:02.129592 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.90 1.17 201 up osd.3 2025-05-14 00:06:02.129603 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-05-14 00:06:02.129615 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.40 1.08 190 up osd.1 2025-05-14 00:06:02.129627 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.44 0.92 202 up osd.4 2025-05-14 00:06:02.129638 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-05-14 00:06:02.129650 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.49 1.10 191 up osd.2 2025-05-14 00:06:02.129662 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1019 MiB 1 KiB 74 MiB 19 GiB 5.34 0.90 197 up osd.5 2025-05-14 00:06:02.129673 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-05-14 00:06:02.129715 | orchestrator | MIN/MAX VAR: 0.83/1.17 STDDEV: 0.72 2025-05-14 00:06:02.170432 | orchestrator | 2025-05-14 00:06:02.170514 | orchestrator | # Ceph monitor status 2025-05-14 00:06:02.170527 | orchestrator | 2025-05-14 00:06:02.170538 | orchestrator | + echo 2025-05-14 00:06:02.170550 | orchestrator | + echo '# Ceph monitor status' 2025-05-14 00:06:02.170561 | orchestrator | + echo 2025-05-14 00:06:02.170572 | orchestrator | + ceph mon stat 2025-05-14 00:06:02.755957 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-05-14 00:06:02.788945 | orchestrator | 2025-05-14 00:06:02.789106 | orchestrator | # Ceph quorum status 2025-05-14 00:06:02.789122 | orchestrator | 2025-05-14 00:06:02.789133 | orchestrator | + echo 2025-05-14 00:06:02.789240 | orchestrator | + echo '# Ceph quorum status' 2025-05-14 00:06:02.789253 | 
orchestrator | + echo 2025-05-14 00:06:02.789263 | orchestrator | + ceph quorum_status 2025-05-14 00:06:02.789284 | orchestrator | + jq 2025-05-14 00:06:03.430594 | orchestrator | { 2025-05-14 00:06:03.430696 | orchestrator | "election_epoch": 6, 2025-05-14 00:06:03.430712 | orchestrator | "quorum": [ 2025-05-14 00:06:03.430725 | orchestrator | 0, 2025-05-14 00:06:03.430736 | orchestrator | 1, 2025-05-14 00:06:03.430747 | orchestrator | 2 2025-05-14 00:06:03.430758 | orchestrator | ], 2025-05-14 00:06:03.430769 | orchestrator | "quorum_names": [ 2025-05-14 00:06:03.430780 | orchestrator | "testbed-node-0", 2025-05-14 00:06:03.430791 | orchestrator | "testbed-node-1", 2025-05-14 00:06:03.430802 | orchestrator | "testbed-node-2" 2025-05-14 00:06:03.430813 | orchestrator | ], 2025-05-14 00:06:03.430824 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-05-14 00:06:03.430838 | orchestrator | "quorum_age": 1737, 2025-05-14 00:06:03.430849 | orchestrator | "features": { 2025-05-14 00:06:03.430861 | orchestrator | "quorum_con": "4540138322906710015", 2025-05-14 00:06:03.430872 | orchestrator | "quorum_mon": [ 2025-05-14 00:06:03.430883 | orchestrator | "kraken", 2025-05-14 00:06:03.430893 | orchestrator | "luminous", 2025-05-14 00:06:03.430905 | orchestrator | "mimic", 2025-05-14 00:06:03.430952 | orchestrator | "osdmap-prune", 2025-05-14 00:06:03.430964 | orchestrator | "nautilus", 2025-05-14 00:06:03.430975 | orchestrator | "octopus", 2025-05-14 00:06:03.430986 | orchestrator | "pacific", 2025-05-14 00:06:03.430997 | orchestrator | "elector-pinging", 2025-05-14 00:06:03.431008 | orchestrator | "quincy", 2025-05-14 00:06:03.431019 | orchestrator | "reef" 2025-05-14 00:06:03.431030 | orchestrator | ] 2025-05-14 00:06:03.431041 | orchestrator | }, 2025-05-14 00:06:03.431075 | orchestrator | "monmap": { 2025-05-14 00:06:03.431104 | orchestrator | "epoch": 1, 2025-05-14 00:06:03.431129 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-05-14 00:06:03.431142 | orchestrator | "modified": "2025-05-13T23:36:43.258534Z", 2025-05-14 00:06:03.431155 | orchestrator | "created": "2025-05-13T23:36:43.258534Z", 2025-05-14 00:06:03.431167 | orchestrator | "min_mon_release": 18, 2025-05-14 00:06:03.431180 | orchestrator | "min_mon_release_name": "reef", 2025-05-14 00:06:03.431193 | orchestrator | "election_strategy": 1, 2025-05-14 00:06:03.431207 | orchestrator | "disallowed_leaders: ": "", 2025-05-14 00:06:03.431220 | orchestrator | "stretch_mode": false, 2025-05-14 00:06:03.431234 | orchestrator | "tiebreaker_mon": "", 2025-05-14 00:06:03.431246 | orchestrator | "removed_ranks: ": "", 2025-05-14 00:06:03.431267 | orchestrator | "features": { 2025-05-14 00:06:03.431288 | orchestrator | "persistent": [ 2025-05-14 00:06:03.431308 | orchestrator | "kraken", 2025-05-14 00:06:03.431327 | orchestrator | "luminous", 2025-05-14 00:06:03.431347 | orchestrator | "mimic", 2025-05-14 00:06:03.431365 | orchestrator | "osdmap-prune", 2025-05-14 00:06:03.431385 | orchestrator | "nautilus", 2025-05-14 00:06:03.431406 | orchestrator | "octopus", 2025-05-14 00:06:03.431427 | orchestrator | "pacific", 2025-05-14 00:06:03.431448 | orchestrator | "elector-pinging", 2025-05-14 00:06:03.431471 | orchestrator | "quincy", 2025-05-14 00:06:03.431490 | orchestrator | "reef" 2025-05-14 00:06:03.431512 | orchestrator | ], 2025-05-14 00:06:03.431524 | orchestrator | "optional": [] 2025-05-14 00:06:03.431536 | orchestrator | }, 2025-05-14 00:06:03.431547 | orchestrator | "mons": [ 2025-05-14 00:06:03.431557 | 
orchestrator | { 2025-05-14 00:06:03.431568 | orchestrator | "rank": 0, 2025-05-14 00:06:03.431579 | orchestrator | "name": "testbed-node-0", 2025-05-14 00:06:03.431616 | orchestrator | "public_addrs": { 2025-05-14 00:06:03.431627 | orchestrator | "addrvec": [ 2025-05-14 00:06:03.431639 | orchestrator | { 2025-05-14 00:06:03.431649 | orchestrator | "type": "v2", 2025-05-14 00:06:03.431660 | orchestrator | "addr": "192.168.16.10:3300", 2025-05-14 00:06:03.431671 | orchestrator | "nonce": 0 2025-05-14 00:06:03.431682 | orchestrator | }, 2025-05-14 00:06:03.431693 | orchestrator | { 2025-05-14 00:06:03.431704 | orchestrator | "type": "v1", 2025-05-14 00:06:03.431715 | orchestrator | "addr": "192.168.16.10:6789", 2025-05-14 00:06:03.431726 | orchestrator | "nonce": 0 2025-05-14 00:06:03.431737 | orchestrator | } 2025-05-14 00:06:03.431747 | orchestrator | ] 2025-05-14 00:06:03.431758 | orchestrator | }, 2025-05-14 00:06:03.431769 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-05-14 00:06:03.431780 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-05-14 00:06:03.431790 | orchestrator | "priority": 0, 2025-05-14 00:06:03.431801 | orchestrator | "weight": 0, 2025-05-14 00:06:03.431812 | orchestrator | "crush_location": "{}" 2025-05-14 00:06:03.431822 | orchestrator | }, 2025-05-14 00:06:03.431833 | orchestrator | { 2025-05-14 00:06:03.431843 | orchestrator | "rank": 1, 2025-05-14 00:06:03.431861 | orchestrator | "name": "testbed-node-1", 2025-05-14 00:06:03.431872 | orchestrator | "public_addrs": { 2025-05-14 00:06:03.431883 | orchestrator | "addrvec": [ 2025-05-14 00:06:03.431894 | orchestrator | { 2025-05-14 00:06:03.431905 | orchestrator | "type": "v2", 2025-05-14 00:06:03.431963 | orchestrator | "addr": "192.168.16.11:3300", 2025-05-14 00:06:03.431983 | orchestrator | "nonce": 0 2025-05-14 00:06:03.432000 | orchestrator | }, 2025-05-14 00:06:03.432018 | orchestrator | { 2025-05-14 00:06:03.432029 | orchestrator | "type": "v1", 2025-05-14 00:06:03.432040 | orchestrator | "addr": "192.168.16.11:6789", 2025-05-14 00:06:03.432051 | orchestrator | "nonce": 0 2025-05-14 00:06:03.432062 | orchestrator | } 2025-05-14 00:06:03.432072 | orchestrator | ] 2025-05-14 00:06:03.432083 | orchestrator | }, 2025-05-14 00:06:03.432094 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-05-14 00:06:03.432105 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-05-14 00:06:03.432115 | orchestrator | "priority": 0, 2025-05-14 00:06:03.432126 | orchestrator | "weight": 0, 2025-05-14 00:06:03.432137 | orchestrator | "crush_location": "{}" 2025-05-14 00:06:03.432148 | orchestrator | }, 2025-05-14 00:06:03.432158 | orchestrator | { 2025-05-14 00:06:03.432169 | orchestrator | "rank": 2, 2025-05-14 00:06:03.432181 | orchestrator | "name": "testbed-node-2", 2025-05-14 00:06:03.432201 | orchestrator | "public_addrs": { 2025-05-14 00:06:03.432220 | orchestrator | "addrvec": [ 2025-05-14 00:06:03.432238 | orchestrator | { 2025-05-14 00:06:03.432256 | orchestrator | "type": "v2", 2025-05-14 00:06:03.432272 | orchestrator | "addr": "192.168.16.12:3300", 2025-05-14 00:06:03.432289 | orchestrator | "nonce": 0 2025-05-14 00:06:03.432306 | orchestrator | }, 2025-05-14 00:06:03.432323 | orchestrator | { 2025-05-14 00:06:03.432340 | orchestrator | "type": "v1", 2025-05-14 00:06:03.432358 | orchestrator | "addr": "192.168.16.12:6789", 2025-05-14 00:06:03.432376 | orchestrator | "nonce": 0 2025-05-14 00:06:03.432394 | orchestrator | } 2025-05-14 00:06:03.432413 | orchestrator | ] 2025-05-14 00:06:03.432432 
| orchestrator | }, 2025-05-14 00:06:03.432450 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-05-14 00:06:03.432470 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-05-14 00:06:03.432489 | orchestrator | "priority": 0, 2025-05-14 00:06:03.432508 | orchestrator | "weight": 0, 2025-05-14 00:06:03.432526 | orchestrator | "crush_location": "{}" 2025-05-14 00:06:03.432541 | orchestrator | } 2025-05-14 00:06:03.432566 | orchestrator | ] 2025-05-14 00:06:03.432588 | orchestrator | } 2025-05-14 00:06:03.432606 | orchestrator | } 2025-05-14 00:06:03.432624 | orchestrator | 2025-05-14 00:06:03.432642 | orchestrator | # Ceph free space status 2025-05-14 00:06:03.432659 | orchestrator | 2025-05-14 00:06:03.432677 | orchestrator | + echo 2025-05-14 00:06:03.432694 | orchestrator | + echo '# Ceph free space status' 2025-05-14 00:06:03.432711 | orchestrator | + echo 2025-05-14 00:06:03.432729 | orchestrator | + ceph df 2025-05-14 00:06:03.975219 | orchestrator | --- RAW STORAGE --- 2025-05-14 00:06:03.975326 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-05-14 00:06:03.975342 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-05-14 00:06:03.975379 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-05-14 00:06:03.975391 | orchestrator | 2025-05-14 00:06:03.975402 | orchestrator | --- POOLS --- 2025-05-14 00:06:03.975414 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-05-14 00:06:03.975427 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-05-14 00:06:03.975438 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-05-14 00:06:03.975449 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-05-14 00:06:03.975460 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-05-14 00:06:03.975470 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-05-14 00:06:03.975481 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-05-14 00:06:03.975492 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2025-05-14 00:06:03.975502 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-05-14 00:06:03.975513 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-05-14 00:06:03.975524 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-05-14 00:06:03.975534 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-05-14 00:06:03.975545 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB 2025-05-14 00:06:03.975556 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-05-14 00:06:03.975567 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-05-14 00:06:04.024785 | orchestrator | ++ semver latest 5.0.0 2025-05-14 00:06:04.060799 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-14 00:06:04.060888 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-14 00:06:04.060903 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-05-14 00:06:04.060946 | orchestrator | + osism apply facts 2025-05-14 00:06:05.879039 | orchestrator | 2025-05-14 00:06:05 | INFO  | Task 3efbb38b-8940-488e-b9e3-3cd27aad1247 (facts) was prepared for execution. 2025-05-14 00:06:05.879112 | orchestrator | 2025-05-14 00:06:05 | INFO  | It takes a moment until task 3efbb38b-8940-488e-b9e3-3cd27aad1247 (facts) has been started and output is visible here. 
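The `semver latest 5.0.0` / `[[ -1 -eq -1 ]]` pair traced above is the status script's version gate: `semver A B` prints -1, 0, or 1 depending on whether A sorts before, equal to, or after B, and the follow-up `!= latest` test exempts the non-numeric `latest` tag, which would otherwise sort before any real release. A minimal sketch of the per-node loop, reconstructed from the trace (the `semver` helper's contract and the skip branch are assumptions, not the script's verbatim source):

    for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
        # "latest" is not a numeric version, so `semver latest 5.0.0` prints -1;
        # without the second test it would be treated as older than 5.0.0.
        if [[ $(semver "${MANAGER_VERSION}" 5.0.0) -eq -1 ]] \
           && [[ "${MANAGER_VERSION}" != "latest" ]]; then
            continue  # hypothetical pre-5.0.0 branch; not visible in this trace
        fi
        echo; echo "## Containers @ ${node}"; echo
        osism container "${node}" ps
        echo; echo "## Images @ ${node}"; echo
        osism container "${node}" images
    done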
2025-05-14 00:06:10.249546 | orchestrator | 2025-05-14 00:06:10.249691 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-14 00:06:10.250451 | orchestrator | 2025-05-14 00:06:10.250755 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-14 00:06:10.253076 | orchestrator | Wednesday 14 May 2025 00:06:10 +0000 (0:00:00.289) 0:00:00.289 ********* 2025-05-14 00:06:10.932284 | orchestrator | ok: [testbed-manager] 2025-05-14 00:06:11.746417 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:11.746515 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:06:11.750795 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:06:11.750827 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:06:11.750832 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:06:11.751346 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:06:11.751711 | orchestrator | 2025-05-14 00:06:11.752293 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-14 00:06:11.752306 | orchestrator | Wednesday 14 May 2025 00:06:11 +0000 (0:00:01.489) 0:00:01.778 ********* 2025-05-14 00:06:11.926866 | orchestrator | skipping: [testbed-manager] 2025-05-14 00:06:12.018982 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:12.100332 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:06:12.178222 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:06:12.260377 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:06:13.015308 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:06:13.017642 | orchestrator | skipping: [testbed-node-5] 2025-05-14 00:06:13.018281 | orchestrator | 2025-05-14 00:06:13.019256 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 00:06:13.020735 | orchestrator | 2025-05-14 00:06:13.024257 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 00:06:13.024731 | orchestrator | Wednesday 14 May 2025 00:06:13 +0000 (0:00:01.274) 0:00:03.053 ********* 2025-05-14 00:06:18.364561 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:06:18.366512 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:18.367723 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:06:18.371842 | orchestrator | ok: [testbed-manager] 2025-05-14 00:06:18.372279 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:06:18.373337 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:06:18.373646 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:06:18.375521 | orchestrator | 2025-05-14 00:06:18.376749 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-14 00:06:18.379980 | orchestrator | 2025-05-14 00:06:18.380594 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-14 00:06:18.381491 | orchestrator | Wednesday 14 May 2025 00:06:18 +0000 (0:00:05.353) 0:00:08.406 ********* 2025-05-14 00:06:18.554892 | orchestrator | skipping: [testbed-manager] 2025-05-14 00:06:18.664184 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:18.747607 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:06:18.830216 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:06:18.913495 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:06:18.956508 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:06:18.957146 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 00:06:18.958199 | orchestrator | 2025-05-14 00:06:18.959677 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 00:06:18.960284 | orchestrator | 2025-05-14 00:06:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 00:06:18.960851 | orchestrator | 2025-05-14 00:06:18 | INFO  | Please wait and do not abort execution. 2025-05-14 00:06:18.961863 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:06:18.962247 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:06:18.962890 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:06:18.963545 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:06:18.964398 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:06:18.965604 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:06:18.966173 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:06:18.967247 | orchestrator | 2025-05-14 00:06:18.967855 | orchestrator | 2025-05-14 00:06:18.968591 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 00:06:18.969245 | orchestrator | Wednesday 14 May 2025 00:06:18 +0000 (0:00:00.591) 0:00:08.998 ********* 2025-05-14 00:06:18.969751 | orchestrator | =============================================================================== 2025-05-14 00:06:18.970309 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.35s 2025-05-14 00:06:18.971075 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.49s 2025-05-14 00:06:18.971568 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s 2025-05-14 00:06:18.972356 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2025-05-14 00:06:19.677589 | orchestrator | + osism validate ceph-mons 2025-05-14 00:06:40.697439 | orchestrator | 2025-05-14 00:06:40.697547 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-05-14 00:06:40.697561 | orchestrator | 2025-05-14 00:06:40.697571 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-14 00:06:40.697580 | orchestrator | Wednesday 14 May 2025 00:06:25 +0000 (0:00:00.407) 0:00:00.407 ********* 2025-05-14 00:06:40.697590 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:06:40.697599 | orchestrator | 2025-05-14 00:06:40.697608 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-14 00:06:40.697617 | orchestrator | Wednesday 14 May 2025 00:06:25 +0000 (0:00:00.664) 0:00:01.071 ********* 2025-05-14 00:06:40.697788 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:06:40.697801 | orchestrator | 2025-05-14 00:06:40.697828 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-14 00:06:40.697838 | orchestrator | 
Wednesday 14 May 2025 00:06:26 +0000 (0:00:00.839) 0:00:01.910 ********* 2025-05-14 00:06:40.697847 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.697857 | orchestrator | 2025-05-14 00:06:40.697866 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-05-14 00:06:40.697875 | orchestrator | Wednesday 14 May 2025 00:06:27 +0000 (0:00:00.264) 0:00:02.174 ********* 2025-05-14 00:06:40.697884 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.697917 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:06:40.697927 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:06:40.697936 | orchestrator | 2025-05-14 00:06:40.697945 | orchestrator | TASK [Get container info] ****************************************************** 2025-05-14 00:06:40.697954 | orchestrator | Wednesday 14 May 2025 00:06:27 +0000 (0:00:00.312) 0:00:02.487 ********* 2025-05-14 00:06:40.697962 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.697972 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:06:40.697982 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:06:40.697992 | orchestrator | 2025-05-14 00:06:40.698008 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-05-14 00:06:40.698069 | orchestrator | Wednesday 14 May 2025 00:06:28 +0000 (0:00:00.972) 0:00:03.459 ********* 2025-05-14 00:06:40.698080 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.698090 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:06:40.698100 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:06:40.698110 | orchestrator | 2025-05-14 00:06:40.698121 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-05-14 00:06:40.698132 | orchestrator | Wednesday 14 May 2025 00:06:28 +0000 (0:00:00.299) 0:00:03.759 ********* 2025-05-14 00:06:40.698142 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.698152 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:06:40.698162 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:06:40.698172 | orchestrator | 2025-05-14 00:06:40.698182 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-14 00:06:40.698193 | orchestrator | Wednesday 14 May 2025 00:06:29 +0000 (0:00:00.543) 0:00:04.302 ********* 2025-05-14 00:06:40.698203 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.698213 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:06:40.698223 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:06:40.698234 | orchestrator | 2025-05-14 00:06:40.698244 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-05-14 00:06:40.698254 | orchestrator | Wednesday 14 May 2025 00:06:29 +0000 (0:00:00.320) 0:00:04.623 ********* 2025-05-14 00:06:40.698264 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.698274 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:06:40.698284 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:06:40.698294 | orchestrator | 2025-05-14 00:06:40.698304 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-05-14 00:06:40.698314 | orchestrator | Wednesday 14 May 2025 00:06:29 +0000 (0:00:00.284) 0:00:04.907 ********* 2025-05-14 00:06:40.698346 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.698356 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:06:40.698364 | orchestrator | ok: [testbed-node-2] 2025-05-14 
00:06:40.698373 | orchestrator | 2025-05-14 00:06:40.698382 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-14 00:06:40.698390 | orchestrator | Wednesday 14 May 2025 00:06:30 +0000 (0:00:00.295) 0:00:05.203 ********* 2025-05-14 00:06:40.698399 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.698408 | orchestrator | 2025-05-14 00:06:40.698416 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-14 00:06:40.698425 | orchestrator | Wednesday 14 May 2025 00:06:30 +0000 (0:00:00.711) 0:00:05.915 ********* 2025-05-14 00:06:40.698434 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.698442 | orchestrator | 2025-05-14 00:06:40.698451 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-14 00:06:40.698459 | orchestrator | Wednesday 14 May 2025 00:06:31 +0000 (0:00:00.275) 0:00:06.190 ********* 2025-05-14 00:06:40.698468 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.698477 | orchestrator | 2025-05-14 00:06:40.698485 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:06:40.698494 | orchestrator | Wednesday 14 May 2025 00:06:31 +0000 (0:00:00.256) 0:00:06.447 ********* 2025-05-14 00:06:40.698503 | orchestrator | 2025-05-14 00:06:40.698512 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:06:40.698521 | orchestrator | Wednesday 14 May 2025 00:06:31 +0000 (0:00:00.078) 0:00:06.525 ********* 2025-05-14 00:06:40.698529 | orchestrator | 2025-05-14 00:06:40.698538 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:06:40.698546 | orchestrator | Wednesday 14 May 2025 00:06:31 +0000 (0:00:00.072) 0:00:06.597 ********* 2025-05-14 00:06:40.698555 | orchestrator | 2025-05-14 00:06:40.698564 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-14 00:06:40.698572 | orchestrator | Wednesday 14 May 2025 00:06:31 +0000 (0:00:00.073) 0:00:06.671 ********* 2025-05-14 00:06:40.698581 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.698590 | orchestrator | 2025-05-14 00:06:40.698599 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-05-14 00:06:40.698607 | orchestrator | Wednesday 14 May 2025 00:06:31 +0000 (0:00:00.255) 0:00:06.927 ********* 2025-05-14 00:06:40.698616 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.698625 | orchestrator | 2025-05-14 00:06:40.698651 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-05-14 00:06:40.698660 | orchestrator | Wednesday 14 May 2025 00:06:32 +0000 (0:00:00.270) 0:00:07.197 ********* 2025-05-14 00:06:40.698669 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.698678 | orchestrator | 2025-05-14 00:06:40.698686 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-05-14 00:06:40.698695 | orchestrator | Wednesday 14 May 2025 00:06:32 +0000 (0:00:00.133) 0:00:07.331 ********* 2025-05-14 00:06:40.698703 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:06:40.698712 | orchestrator | 2025-05-14 00:06:40.698721 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-05-14 00:06:40.698729 | orchestrator | 
Wednesday 14 May 2025 00:06:33 +0000 (0:00:01.637) 0:00:08.968 ********* 2025-05-14 00:06:40.698738 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.698746 | orchestrator | 2025-05-14 00:06:40.698755 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-05-14 00:06:40.698763 | orchestrator | Wednesday 14 May 2025 00:06:34 +0000 (0:00:00.286) 0:00:09.254 ********* 2025-05-14 00:06:40.698772 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.698780 | orchestrator | 2025-05-14 00:06:40.698789 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-05-14 00:06:40.698797 | orchestrator | Wednesday 14 May 2025 00:06:34 +0000 (0:00:00.322) 0:00:09.576 ********* 2025-05-14 00:06:40.698806 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.698822 | orchestrator | 2025-05-14 00:06:40.698831 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-05-14 00:06:40.698840 | orchestrator | Wednesday 14 May 2025 00:06:34 +0000 (0:00:00.253) 0:00:09.830 ********* 2025-05-14 00:06:40.698848 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.698857 | orchestrator | 2025-05-14 00:06:40.698865 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-05-14 00:06:40.698874 | orchestrator | Wednesday 14 May 2025 00:06:34 +0000 (0:00:00.232) 0:00:10.062 ********* 2025-05-14 00:06:40.698887 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.698930 | orchestrator | 2025-05-14 00:06:40.698939 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-05-14 00:06:40.698948 | orchestrator | Wednesday 14 May 2025 00:06:35 +0000 (0:00:00.139) 0:00:10.202 ********* 2025-05-14 00:06:40.698956 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.698965 | orchestrator | 2025-05-14 00:06:40.698974 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-05-14 00:06:40.698982 | orchestrator | Wednesday 14 May 2025 00:06:35 +0000 (0:00:00.136) 0:00:10.339 ********* 2025-05-14 00:06:40.698991 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.698999 | orchestrator | 2025-05-14 00:06:40.699008 | orchestrator | TASK [Gather status data] ****************************************************** 2025-05-14 00:06:40.699016 | orchestrator | Wednesday 14 May 2025 00:06:35 +0000 (0:00:00.120) 0:00:10.459 ********* 2025-05-14 00:06:40.699025 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:06:40.699074 | orchestrator | 2025-05-14 00:06:40.699085 | orchestrator | TASK [Set health test data] **************************************************** 2025-05-14 00:06:40.699094 | orchestrator | Wednesday 14 May 2025 00:06:36 +0000 (0:00:01.278) 0:00:11.738 ********* 2025-05-14 00:06:40.699102 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.699111 | orchestrator | 2025-05-14 00:06:40.699119 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-05-14 00:06:40.699128 | orchestrator | Wednesday 14 May 2025 00:06:36 +0000 (0:00:00.229) 0:00:11.967 ********* 2025-05-14 00:06:40.699136 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.699145 | orchestrator | 2025-05-14 00:06:40.699153 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-05-14 00:06:40.699162 | orchestrator | Wednesday 14 May 2025 
00:06:36 +0000 (0:00:00.134) 0:00:12.102 ********* 2025-05-14 00:06:40.699171 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:06:40.699179 | orchestrator | 2025-05-14 00:06:40.699188 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-05-14 00:06:40.699196 | orchestrator | Wednesday 14 May 2025 00:06:37 +0000 (0:00:00.171) 0:00:12.273 ********* 2025-05-14 00:06:40.699205 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.699213 | orchestrator | 2025-05-14 00:06:40.699222 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-05-14 00:06:40.699230 | orchestrator | Wednesday 14 May 2025 00:06:37 +0000 (0:00:00.121) 0:00:12.395 ********* 2025-05-14 00:06:40.699239 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.699247 | orchestrator | 2025-05-14 00:06:40.699256 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-14 00:06:40.699264 | orchestrator | Wednesday 14 May 2025 00:06:37 +0000 (0:00:00.341) 0:00:12.737 ********* 2025-05-14 00:06:40.699273 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:06:40.699281 | orchestrator | 2025-05-14 00:06:40.699290 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-14 00:06:40.699298 | orchestrator | Wednesday 14 May 2025 00:06:37 +0000 (0:00:00.266) 0:00:13.004 ********* 2025-05-14 00:06:40.699307 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:06:40.699315 | orchestrator | 2025-05-14 00:06:40.699324 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-14 00:06:40.699333 | orchestrator | Wednesday 14 May 2025 00:06:38 +0000 (0:00:00.283) 0:00:13.287 ********* 2025-05-14 00:06:40.699341 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:06:40.699358 | orchestrator | 2025-05-14 00:06:40.699367 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-14 00:06:40.699375 | orchestrator | Wednesday 14 May 2025 00:06:39 +0000 (0:00:01.713) 0:00:15.000 ********* 2025-05-14 00:06:40.699384 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:06:40.699396 | orchestrator | 2025-05-14 00:06:40.699405 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-14 00:06:40.699414 | orchestrator | Wednesday 14 May 2025 00:06:40 +0000 (0:00:00.289) 0:00:15.290 ********* 2025-05-14 00:06:40.699422 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:06:40.699431 | orchestrator | 2025-05-14 00:06:40.699446 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:06:43.217719 | orchestrator | Wednesday 14 May 2025 00:06:40 +0000 (0:00:00.254) 0:00:15.545 ********* 2025-05-14 00:06:43.217822 | orchestrator | 2025-05-14 00:06:43.217838 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:06:43.217849 | orchestrator | Wednesday 14 May 2025 00:06:40 +0000 (0:00:00.085) 0:00:15.631 ********* 2025-05-14 00:06:43.217860 | orchestrator | 2025-05-14 00:06:43.217871 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:06:43.217882 | orchestrator | Wednesday 14 May 2025 00:06:40 +0000 
(0:00:00.080) 0:00:15.711 ********* 2025-05-14 00:06:43.217970 | orchestrator | 2025-05-14 00:06:43.217991 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-14 00:06:43.218002 | orchestrator | Wednesday 14 May 2025 00:06:40 +0000 (0:00:00.080) 0:00:15.791 ********* 2025-05-14 00:06:43.218014 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:06:43.218109 | orchestrator | 2025-05-14 00:06:43.218122 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-14 00:06:43.218133 | orchestrator | Wednesday 14 May 2025 00:06:42 +0000 (0:00:01.588) 0:00:17.380 ********* 2025-05-14 00:06:43.218143 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-05-14 00:06:43.218154 | orchestrator |  "msg": [ 2025-05-14 00:06:43.218168 | orchestrator |  "Validator run completed.", 2025-05-14 00:06:43.218181 | orchestrator |  "You can find the report file here:", 2025-05-14 00:06:43.218192 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-05-14T00:06:25+00:00-report.json", 2025-05-14 00:06:43.218204 | orchestrator |  "on the following host:", 2025-05-14 00:06:43.218216 | orchestrator |  "testbed-manager" 2025-05-14 00:06:43.218226 | orchestrator |  ] 2025-05-14 00:06:43.218238 | orchestrator | } 2025-05-14 00:06:43.218249 | orchestrator | 2025-05-14 00:06:43.218263 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 00:06:43.218277 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-14 00:06:43.218291 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:06:43.218304 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:06:43.218316 | orchestrator | 2025-05-14 00:06:43.218328 | orchestrator | 2025-05-14 00:06:43.218340 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 00:06:43.218354 | orchestrator | Wednesday 14 May 2025 00:06:42 +0000 (0:00:00.604) 0:00:17.984 ********* 2025-05-14 00:06:43.218367 | orchestrator | =============================================================================== 2025-05-14 00:06:43.218379 | orchestrator | Aggregate test results step one ----------------------------------------- 1.71s 2025-05-14 00:06:43.218412 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.64s 2025-05-14 00:06:43.218425 | orchestrator | Write report file ------------------------------------------------------- 1.59s 2025-05-14 00:06:43.218469 | orchestrator | Gather status data ------------------------------------------------------ 1.28s 2025-05-14 00:06:43.218482 | orchestrator | Get container info ------------------------------------------------------ 0.97s 2025-05-14 00:06:43.218493 | orchestrator | Create report output directory ------------------------------------------ 0.84s 2025-05-14 00:06:43.218507 | orchestrator | Aggregate test results step one ----------------------------------------- 0.71s 2025-05-14 00:06:43.218520 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s 2025-05-14 00:06:43.218532 | orchestrator | Print report file information ------------------------------------------- 0.60s 2025-05-14 00:06:43.218543 | 
orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2025-05-14 00:06:43.218553 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s 2025-05-14 00:06:43.218564 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.32s 2025-05-14 00:06:43.218574 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2025-05-14 00:06:43.218584 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-05-14 00:06:43.218595 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2025-05-14 00:06:43.218605 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s 2025-05-14 00:06:43.218616 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2025-05-14 00:06:43.218627 | orchestrator | Set quorum test data ---------------------------------------------------- 0.29s 2025-05-14 00:06:43.218637 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.28s 2025-05-14 00:06:43.218648 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2025-05-14 00:06:43.482971 | orchestrator | + osism validate ceph-mgrs 2025-05-14 00:07:04.410829 | orchestrator | 2025-05-14 00:07:04.410949 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-05-14 00:07:04.410963 | orchestrator | 2025-05-14 00:07:04.410972 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-14 00:07:04.410980 | orchestrator | Wednesday 14 May 2025 00:06:49 +0000 (0:00:00.440) 0:00:00.440 ********* 2025-05-14 00:07:04.410989 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:04.410997 | orchestrator | 2025-05-14 00:07:04.411005 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-14 00:07:04.411013 | orchestrator | Wednesday 14 May 2025 00:06:50 +0000 (0:00:00.663) 0:00:01.104 ********* 2025-05-14 00:07:04.411021 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:04.411029 | orchestrator | 2025-05-14 00:07:04.411036 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-14 00:07:04.411044 | orchestrator | Wednesday 14 May 2025 00:06:51 +0000 (0:00:00.867) 0:00:01.971 ********* 2025-05-14 00:07:04.411052 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:07:04.411062 | orchestrator | 2025-05-14 00:07:04.411070 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-05-14 00:07:04.411078 | orchestrator | Wednesday 14 May 2025 00:06:51 +0000 (0:00:00.248) 0:00:02.219 ********* 2025-05-14 00:07:04.411086 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:07:04.411093 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:07:04.411102 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:07:04.411110 | orchestrator | 2025-05-14 00:07:04.411118 | orchestrator | TASK [Get container info] ****************************************************** 2025-05-14 00:07:04.411126 | orchestrator | Wednesday 14 May 2025 00:06:51 +0000 (0:00:00.311) 0:00:02.531 ********* 2025-05-14 00:07:04.411134 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:07:04.411142 | 
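
The check script drives one `osism validate <name>` run per Ceph component, in the order seen in this log. The same sequence can be reproduced by hand on the manager (a sketch; it assumes the `osism` CLI is on PATH and that a non-zero exit code signals a failed validation):

    # Run the Ceph validators in sequence, stopping at the first failure.
    set -e
    for validator in ceph-mons ceph-mgrs ceph-osds; do
        osism validate "$validator"
    done
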
orchestrator | ok: [testbed-node-0] 2025-05-14 00:07:04.411150 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:07:04.411157 | orchestrator | 2025-05-14 00:07:04.411165 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-05-14 00:07:04.411188 | orchestrator | Wednesday 14 May 2025 00:06:52 +0000 (0:00:00.983) 0:00:03.515 ********* 2025-05-14 00:07:04.411197 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:07:04.411205 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:07:04.411213 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:07:04.411221 | orchestrator | 2025-05-14 00:07:04.411229 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-05-14 00:07:04.411237 | orchestrator | Wednesday 14 May 2025 00:06:53 +0000 (0:00:00.288) 0:00:03.804 ********* 2025-05-14 00:07:04.411244 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:07:04.411258 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:07:04.411266 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:07:04.411274 | orchestrator | 2025-05-14 00:07:04.411281 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-14 00:07:04.411289 | orchestrator | Wednesday 14 May 2025 00:06:53 +0000 (0:00:00.503) 0:00:04.307 ********* 2025-05-14 00:07:04.411297 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:07:04.411305 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:07:04.411313 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:07:04.411320 | orchestrator | 2025-05-14 00:07:04.411328 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-05-14 00:07:04.411336 | orchestrator | Wednesday 14 May 2025 00:06:53 +0000 (0:00:00.303) 0:00:04.610 ********* 2025-05-14 00:07:04.411344 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:07:04.411352 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:07:04.411360 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:07:04.411367 | orchestrator | 2025-05-14 00:07:04.411375 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-05-14 00:07:04.411384 | orchestrator | Wednesday 14 May 2025 00:06:54 +0000 (0:00:00.344) 0:00:04.955 ********* 2025-05-14 00:07:04.411393 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:07:04.411402 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:07:04.411411 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:07:04.411420 | orchestrator | 2025-05-14 00:07:04.411430 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-14 00:07:04.411439 | orchestrator | Wednesday 14 May 2025 00:06:54 +0000 (0:00:00.331) 0:00:05.287 ********* 2025-05-14 00:07:04.411448 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:07:04.411457 | orchestrator | 2025-05-14 00:07:04.411466 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-14 00:07:04.411476 | orchestrator | Wednesday 14 May 2025 00:06:55 +0000 (0:00:00.755) 0:00:06.042 ********* 2025-05-14 00:07:04.411484 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:07:04.411493 | orchestrator | 2025-05-14 00:07:04.411502 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-14 00:07:04.411510 | orchestrator | Wednesday 14 May 2025 00:06:55 +0000 (0:00:00.236) 0:00:06.279 ********* 2025-05-14 
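
The container tests above first assert that a ceph-mgr container exists on each node, then that it is in the running state. A rough stand-alone equivalent (a sketch; the name pattern `ceph-mgr-<hostname>` is an assumption mirroring the `ceph-mds-<host>` and `ceph-crash-<host>` names visible later in this log):

    # Check that the local ceph-mgr container exists and is running.
    name="ceph-mgr-$(hostname -s)"
    if ! state=$(docker inspect -f '{{.State.Status}}' "$name" 2>/dev/null); then
        echo "${name}: container missing" >&2
        exit 1
    fi
    [ "$state" = "running" ] || { echo "${name}: not running (${state})" >&2; exit 1; }
    echo "${name}: running"
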
00:07:04.411519 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:07:04.411528 | orchestrator | 2025-05-14 00:07:04.411538 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:04.411547 | orchestrator | Wednesday 14 May 2025 00:06:55 +0000 (0:00:00.243) 0:00:06.522 ********* 2025-05-14 00:07:04.411556 | orchestrator | 2025-05-14 00:07:04.411565 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:04.411574 | orchestrator | Wednesday 14 May 2025 00:06:55 +0000 (0:00:00.070) 0:00:06.593 ********* 2025-05-14 00:07:04.411583 | orchestrator | 2025-05-14 00:07:04.411592 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:04.411602 | orchestrator | Wednesday 14 May 2025 00:06:55 +0000 (0:00:00.070) 0:00:06.664 ********* 2025-05-14 00:07:04.411610 | orchestrator | 2025-05-14 00:07:04.411620 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-14 00:07:04.411629 | orchestrator | Wednesday 14 May 2025 00:06:55 +0000 (0:00:00.072) 0:00:06.736 ********* 2025-05-14 00:07:04.411639 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:07:04.411654 | orchestrator | 2025-05-14 00:07:04.411664 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-05-14 00:07:04.411673 | orchestrator | Wednesday 14 May 2025 00:06:56 +0000 (0:00:00.245) 0:00:06.981 ********* 2025-05-14 00:07:04.411682 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:07:04.411692 | orchestrator | 2025-05-14 00:07:04.411716 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-05-14 00:07:04.411726 | orchestrator | Wednesday 14 May 2025 00:06:56 +0000 (0:00:00.246) 0:00:07.228 ********* 2025-05-14 00:07:04.411737 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:07:04.411747 | orchestrator | 2025-05-14 00:07:04.411756 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-05-14 00:07:04.411765 | orchestrator | Wednesday 14 May 2025 00:06:56 +0000 (0:00:00.122) 0:00:07.350 ********* 2025-05-14 00:07:04.411772 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:07:04.411780 | orchestrator | 2025-05-14 00:07:04.411789 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-05-14 00:07:04.411796 | orchestrator | Wednesday 14 May 2025 00:06:58 +0000 (0:00:01.932) 0:00:09.283 ********* 2025-05-14 00:07:04.411804 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:07:04.411812 | orchestrator | 2025-05-14 00:07:04.411820 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-05-14 00:07:04.411828 | orchestrator | Wednesday 14 May 2025 00:06:58 +0000 (0:00:00.253) 0:00:09.536 ********* 2025-05-14 00:07:04.411836 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:07:04.411844 | orchestrator | 2025-05-14 00:07:04.411852 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-05-14 00:07:04.411860 | orchestrator | Wednesday 14 May 2025 00:06:59 +0000 (0:00:00.712) 0:00:10.249 ********* 2025-05-14 00:07:04.411868 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:07:04.411876 | orchestrator | 2025-05-14 00:07:04.411884 | orchestrator | TASK [Pass test if required mgr modules are enabled] 
*************************** 2025-05-14 00:07:04.411906 | orchestrator | Wednesday 14 May 2025 00:06:59 +0000 (0:00:00.140) 0:00:10.389 ********* 2025-05-14 00:07:04.411914 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:07:04.411922 | orchestrator | 2025-05-14 00:07:04.411930 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-14 00:07:04.411938 | orchestrator | Wednesday 14 May 2025 00:06:59 +0000 (0:00:00.175) 0:00:10.565 ********* 2025-05-14 00:07:04.411946 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:04.411954 | orchestrator | 2025-05-14 00:07:04.411962 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-14 00:07:04.411970 | orchestrator | Wednesday 14 May 2025 00:07:00 +0000 (0:00:00.267) 0:00:10.832 ********* 2025-05-14 00:07:04.411977 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:07:04.411985 | orchestrator | 2025-05-14 00:07:04.411993 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-14 00:07:04.412002 | orchestrator | Wednesday 14 May 2025 00:07:00 +0000 (0:00:00.232) 0:00:11.065 ********* 2025-05-14 00:07:04.412010 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:04.412018 | orchestrator | 2025-05-14 00:07:04.412026 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-14 00:07:04.412034 | orchestrator | Wednesday 14 May 2025 00:07:01 +0000 (0:00:01.238) 0:00:12.304 ********* 2025-05-14 00:07:04.412041 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:04.412049 | orchestrator | 2025-05-14 00:07:04.412057 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-14 00:07:04.412065 | orchestrator | Wednesday 14 May 2025 00:07:01 +0000 (0:00:00.234) 0:00:12.538 ********* 2025-05-14 00:07:04.412073 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:04.412081 | orchestrator | 2025-05-14 00:07:04.412089 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:04.412097 | orchestrator | Wednesday 14 May 2025 00:07:02 +0000 (0:00:00.298) 0:00:12.836 ********* 2025-05-14 00:07:04.412109 | orchestrator | 2025-05-14 00:07:04.412117 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:04.412126 | orchestrator | Wednesday 14 May 2025 00:07:02 +0000 (0:00:00.070) 0:00:12.907 ********* 2025-05-14 00:07:04.412133 | orchestrator | 2025-05-14 00:07:04.412141 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:04.412149 | orchestrator | Wednesday 14 May 2025 00:07:02 +0000 (0:00:00.069) 0:00:12.976 ********* 2025-05-14 00:07:04.412157 | orchestrator | 2025-05-14 00:07:04.412165 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-14 00:07:04.412173 | orchestrator | Wednesday 14 May 2025 00:07:02 +0000 (0:00:00.071) 0:00:13.048 ********* 2025-05-14 00:07:04.412181 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:04.412188 | orchestrator | 2025-05-14 00:07:04.412196 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-14 00:07:04.412204 | 
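
"Gather list of mgr modules" shells out to Ceph, and the following tasks compare the enabled modules against a required set. A compact sketch of that comparison (the required list here is illustrative, not taken from the log; `ceph mgr module ls -f json` does expose an `enabled_modules` array):

    # Fail if any required mgr module is not enabled (required list is an example).
    required="balancer status"
    enabled=$(ceph mgr module ls -f json | jq -r '.enabled_modules[]')
    for mod in $required; do
        echo "$enabled" | grep -qx "$mod" \
            || { echo "required mgr module '${mod}' is disabled" >&2; exit 1; }
    done
    echo "all required mgr modules enabled"
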
orchestrator | Wednesday 14 May 2025 00:07:03 +0000 (0:00:01.679) 0:00:14.727 ********* 2025-05-14 00:07:04.412212 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-05-14 00:07:04.412220 | orchestrator |  "msg": [ 2025-05-14 00:07:04.412229 | orchestrator |  "Validator run completed.", 2025-05-14 00:07:04.412238 | orchestrator |  "You can find the report file here:", 2025-05-14 00:07:04.412246 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-05-14T00:06:50+00:00-report.json", 2025-05-14 00:07:04.412254 | orchestrator |  "on the following host:", 2025-05-14 00:07:04.412262 | orchestrator |  "testbed-manager" 2025-05-14 00:07:04.412270 | orchestrator |  ] 2025-05-14 00:07:04.412279 | orchestrator | } 2025-05-14 00:07:04.412287 | orchestrator | 2025-05-14 00:07:04.412295 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 00:07:04.412305 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-14 00:07:04.412314 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:07:04.412328 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:07:04.717197 | orchestrator | 2025-05-14 00:07:04.717322 | orchestrator | 2025-05-14 00:07:04.717340 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 00:07:04.717353 | orchestrator | Wednesday 14 May 2025 00:07:04 +0000 (0:00:00.416) 0:00:15.143 ********* 2025-05-14 00:07:04.717365 | orchestrator | =============================================================================== 2025-05-14 00:07:04.717376 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.93s 2025-05-14 00:07:04.717387 | orchestrator | Write report file ------------------------------------------------------- 1.68s 2025-05-14 00:07:04.717398 | orchestrator | Aggregate test results step one ----------------------------------------- 1.24s 2025-05-14 00:07:04.717408 | orchestrator | Get container info ------------------------------------------------------ 0.98s 2025-05-14 00:07:04.717419 | orchestrator | Create report output directory ------------------------------------------ 0.87s 2025-05-14 00:07:04.717430 | orchestrator | Aggregate test results step one ----------------------------------------- 0.76s 2025-05-14 00:07:04.717440 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.71s 2025-05-14 00:07:04.717458 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s 2025-05-14 00:07:04.717476 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2025-05-14 00:07:04.717508 | orchestrator | Print report file information ------------------------------------------- 0.42s 2025-05-14 00:07:04.717528 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.34s 2025-05-14 00:07:04.717578 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.33s 2025-05-14 00:07:04.717595 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-05-14 00:07:04.717632 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2025-05-14 00:07:04.717652 | orchestrator | 
Aggregate test results step three --------------------------------------- 0.30s 2025-05-14 00:07:04.717670 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2025-05-14 00:07:04.717691 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.27s 2025-05-14 00:07:04.717710 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.25s 2025-05-14 00:07:04.717736 | orchestrator | Define report vars ------------------------------------------------------ 0.25s 2025-05-14 00:07:04.717757 | orchestrator | Fail due to missing containers ------------------------------------------ 0.25s 2025-05-14 00:07:04.985938 | orchestrator | + osism validate ceph-osds 2025-05-14 00:07:15.791025 | orchestrator | 2025-05-14 00:07:15.791139 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-05-14 00:07:15.791157 | orchestrator | 2025-05-14 00:07:15.791169 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-14 00:07:15.791181 | orchestrator | Wednesday 14 May 2025 00:07:11 +0000 (0:00:00.425) 0:00:00.425 ********* 2025-05-14 00:07:15.791193 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:15.791204 | orchestrator | 2025-05-14 00:07:15.791215 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 00:07:15.791227 | orchestrator | Wednesday 14 May 2025 00:07:11 +0000 (0:00:00.632) 0:00:01.057 ********* 2025-05-14 00:07:15.791238 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:15.791249 | orchestrator | 2025-05-14 00:07:15.791260 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-14 00:07:15.791271 | orchestrator | Wednesday 14 May 2025 00:07:12 +0000 (0:00:00.395) 0:00:01.453 ********* 2025-05-14 00:07:15.791281 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:15.791292 | orchestrator | 2025-05-14 00:07:15.791303 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-14 00:07:15.791314 | orchestrator | Wednesday 14 May 2025 00:07:13 +0000 (0:00:01.036) 0:00:02.489 ********* 2025-05-14 00:07:15.791325 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:15.791338 | orchestrator | 2025-05-14 00:07:15.791350 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-14 00:07:15.791361 | orchestrator | Wednesday 14 May 2025 00:07:13 +0000 (0:00:00.126) 0:00:02.616 ********* 2025-05-14 00:07:15.791372 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:15.791383 | orchestrator | 2025-05-14 00:07:15.791394 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-14 00:07:15.791405 | orchestrator | Wednesday 14 May 2025 00:07:13 +0000 (0:00:00.156) 0:00:02.773 ********* 2025-05-14 00:07:15.791416 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:15.791427 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:07:15.791439 | orchestrator | skipping: [testbed-node-5] 2025-05-14 00:07:15.791450 | orchestrator | 2025-05-14 00:07:15.791461 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-14 00:07:15.791471 | orchestrator | Wednesday 14 May 2025 00:07:13 +0000 
(0:00:00.306) 0:00:03.079 ********* 2025-05-14 00:07:15.791482 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:15.791493 | orchestrator | 2025-05-14 00:07:15.791507 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-14 00:07:15.791520 | orchestrator | Wednesday 14 May 2025 00:07:14 +0000 (0:00:00.143) 0:00:03.223 ********* 2025-05-14 00:07:15.791533 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:15.791546 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:15.791559 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:15.791573 | orchestrator | 2025-05-14 00:07:15.791586 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-05-14 00:07:15.791622 | orchestrator | Wednesday 14 May 2025 00:07:14 +0000 (0:00:00.342) 0:00:03.566 ********* 2025-05-14 00:07:15.791635 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:15.791647 | orchestrator | 2025-05-14 00:07:15.791661 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-14 00:07:15.791673 | orchestrator | Wednesday 14 May 2025 00:07:14 +0000 (0:00:00.575) 0:00:04.142 ********* 2025-05-14 00:07:15.791686 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:15.791698 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:15.791711 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:15.791724 | orchestrator | 2025-05-14 00:07:15.791737 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-05-14 00:07:15.791750 | orchestrator | Wednesday 14 May 2025 00:07:15 +0000 (0:00:00.539) 0:00:04.681 ********* 2025-05-14 00:07:15.791766 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a60a503938250a92c2324901258e95aae31d760f1fed68dd923500fa6fd73460', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-14 00:07:15.791783 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4e4b537f14803bcf68114739b235ac7566eef48278d83b82c6b5204af4bc0834', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-14 00:07:15.791796 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eb0e9ce7ce6a47614234c218fbf9af6aa5c62b9e9315143982d65b044ad54b66', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-14 00:07:15.791811 | orchestrator | skipping: [testbed-node-3] => (item={'id': '73b53ffe0400225225dd42e2e189c50cca0a66ec85062ad1feac7d53184e0c00', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-14 00:07:15.791838 | orchestrator | skipping: [testbed-node-3] => (item={'id': '57d58d8d877512900a988bb64fbb80e7d66bad7e12dd06a9b98b27fc785c7b70', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-14 00:07:15.791880 | orchestrator | skipping: [testbed-node-3] => (item={'id': '29164ef1c6d6e51706cbf516aa330f1618f275ad6e7438bf00712ff960a0aa01', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-14 00:07:15.791920 | orchestrator | skipping: 
[testbed-node-3] => (item={'id': '353581fb201b52380e310c1c8f84329534b198ba6a66db1bea261f6de99f7e43', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-05-14 00:07:15.791941 | orchestrator | skipping: [testbed-node-3] => (item={'id': '89a6eceb657576520d5b419c1a6af1c2d4c36efbc1d4f9b0068c6cbc9141c4da', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-05-14 00:07:15.791962 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e10620ed2f5f15f9665942addbdfa88027d9e93baf07941211dd3d103120aa6c', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2025-05-14 00:07:15.791986 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a5f62e1dc4b98d4592018dcd5cdd7e82e5c61256d51ea9c918179c2e8d872fcd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-14 00:07:15.792002 | orchestrator | skipping: [testbed-node-3] => (item={'id': '76db262639244d2235ce122cc8661e31eac95932cce7fc5abd8c282c542f72df', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-05-14 00:07:15.792024 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a057e48a5ebda9132f44b09b05ac61922183232c55b49e1129a497614916faa3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-05-14 00:07:15.792036 | orchestrator | ok: [testbed-node-3] => (item={'id': '4a5720523d2b60aa5e229e9821d592af8a606958f078bc2c0fb7c8fd4e12e9fe', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-05-14 00:07:15.792047 | orchestrator | ok: [testbed-node-3] => (item={'id': 'f39dff3f667b1394d6cfb4be0d1b5c0f81ceaee1e21362cbb520b2fa8c424153', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-05-14 00:07:15.792059 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a7706256b9599ded980a7980020f9662456b13586e6360f72430fcd6f17f6646', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-14 00:07:15.792070 | orchestrator | skipping: [testbed-node-3] => (item={'id': '221a8331bcb88e3b627af810644995eb4f15c09cfac5e2618e398872799e45a1', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-05-14 00:07:15.792082 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6c3658e8eed3eab54a62e8fb7904c0fbad3291615a7f8ca71d8c2247f3dd8187', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-05-14 00:07:15.792093 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ee11159bf71259e72d1234a96b3477f7e70766be9a2077bf77b8c08b76878c24', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-05-14 00:07:15.792104 | orchestrator | 
skipping: [testbed-node-3] => (item={'id': '9b1e3b982e28eae1bbb3f82a70b20b9f8f2268e8f063e7abd7dfd995466ebd93', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-05-14 00:07:15.792115 | orchestrator | skipping: [testbed-node-3] => (item={'id': '479ad5cbbfe646196923679802d384793653abd3d90f082cd3a659d16740909e', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-05-14 00:07:15.792127 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c8e4ee08c24bf1a9f5bafbeda5c01384ebd581904a3d80366c6ae7d91f61270c', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-14 00:07:15.792146 | orchestrator | skipping: [testbed-node-4] => (item={'id': '36ab8ab6548a47c39655292651d82d3690bd8bec0a0affd6ac505bb8b0ff61dd', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-14 00:07:16.044600 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3f5cb989bf1b8816eb13f19e11c078ce10a7a5c0bbcdfff3c2f8d857887791a8', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-14 00:07:16.044711 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4605a271b2bd52ccb0b98c0c1222289062ba953301379d50a40267c8b6492e59', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-14 00:07:16.044732 | orchestrator | skipping: [testbed-node-4] => (item={'id': '72c0023e653cc766318a5ae0bc19e15915668c97bcf91a97f2439182952204dc', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-14 00:07:16.044775 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c28cf64e0e0a4318576cb1b3ed0804c716521b2a646f307324b42037398009f4', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-14 00:07:16.044793 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dc3ae9bfae98df08363b68f21ada2f3a8dc7a183ab5d9892de812a15d7120a75', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-05-14 00:07:16.044808 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd79874922ff88196199eb14308625dab58fab0841fdb9387390ca89574e41836', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-05-14 00:07:16.044823 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6f76d2219fec14a079fcad13003173613b5b69ea8e765c0c5b05f0cf09ba5fb0', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2025-05-14 00:07:16.044839 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1ea1604bb250a89237f17b3a7e7af250ab17e15dcc5f5057398191fffcd245d9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 
minutes'})  2025-05-14 00:07:16.044854 | orchestrator | skipping: [testbed-node-4] => (item={'id': '660e481f0e39aca99af914818334a448c421966a47fe4f94dc7ef29e6ba4e98b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-05-14 00:07:16.044870 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2fb4010d8b320e1239217e83b2f339cf58b99fc2ecc0158a5bb0274600b26aa5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-05-14 00:07:16.044888 | orchestrator | ok: [testbed-node-4] => (item={'id': '1c8d2ad390456a3f0ee6b427525a4cce974c3642e436e5d80dd047bb4ab4669f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-05-14 00:07:16.044968 | orchestrator | ok: [testbed-node-4] => (item={'id': '2d822fe23937f062ade39481d0ad3a37e1c6920db600744bf6b5ccf6dfbf9854', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-05-14 00:07:16.044985 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f3ea4ef9cbff6a02eebc949b4146cae141d892632c8e4e28bef5c10413ca55bc', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-14 00:07:16.045018 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'de30ab4e39ebcb1bd43499b9400713016d538e99fa878ca9dc16805ffe9b3ea6', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-05-14 00:07:16.045040 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e93e0a296d53e68dc4c0af38f4eaef4fb113a78958b1dba0344ace9c4ce56054', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-05-14 00:07:16.045075 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd301d06ab6470388e2d61610dfc1c6bca8e0b82528c3234e71bd4d057f76147b', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-05-14 00:07:16.045092 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b8ba27d9cf19f96f66e3ccfbe2a2b388201a50d04c79a5409d1996d6aac0c570', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-05-14 00:07:16.045121 | orchestrator | skipping: [testbed-node-4] => (item={'id': '559cdb3f2303e7130fe3874dd3013cd7c9338d4c6120b6c6c17d83b287b3c6c8', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-05-14 00:07:16.045138 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'de0c62890670b9e35b1c2dd0bf13f61e42e350f94096511762d6fcb708c1d676', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-14 00:07:16.045154 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f2ff1269b3de9cafd366141d335358dedba7e441c0c58a3c6f281ba187b8a92d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-14 00:07:16.045170 | orchestrator | skipping: 
[testbed-node-5] => (item={'id': '04a1c8730a46a7428453bd2dff54725d97b4582d0abba4ac2375393a8d6eb131', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-14 00:07:16.045186 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f216809c3aa4148c40970461a46a8af783c876823858a7b6a96017faf94f3b99', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-14 00:07:16.045202 | orchestrator | skipping: [testbed-node-5] => (item={'id': '719eea5c28cea2c340e513dac10ac2e9e1e2bd98c5d281d35eee6a56685c58d9', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-14 00:07:16.045219 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c68bc84929a65e527ab002bced8c9819469a360f21bef956afebc967cdd26cf4', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-14 00:07:16.045235 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b2f2416466351177173cda99eb68fa363a48c75a50043d2729ec8a29cb163f90', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-05-14 00:07:16.045251 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a200f1b5c80ac8145593a3d7208de02ec9ebee8bfc3ab15e1687d335363c104b', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-05-14 00:07:16.045267 | orchestrator | skipping: [testbed-node-5] => (item={'id': '56ac39a7d18a0f6be877ea7e6a7db58f93b39d0551c1e4899b1265f06bea1d28', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2025-05-14 00:07:16.045283 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4265f9c64f1ff4e9104cdcb8f27cfe38c996938fdc5ed718d4a1ce853be65742', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-14 00:07:16.045300 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ef0e71c6e820ef68c542084cb4f4a55a7150f576ce68cd4ed67c84f11a0a4686', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-05-14 00:07:16.045322 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a9c3fdca885d5eba6f8b61c2e346a14d66fdaf9a2573c11be542b3efddf5639', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-05-14 00:07:16.045349 | orchestrator | ok: [testbed-node-5] => (item={'id': '78b4dd2de8edca962db01f55a4f6b323caf88fd90b4a84d2044a5fd80c6a2cab', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-05-14 00:07:24.549997 | orchestrator | ok: [testbed-node-5] => (item={'id': 'cf797790c382760a874808785335d62a43e10ed561835aaf6fbe07216a7b7ac1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-05-14 
00:07:24.550176 | orchestrator | skipping: [testbed-node-5] => (item={'id': '457c9d860490affa1b02084a607b348259456ca5423eaf584b51cd6957e04ecc', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-05-14 00:07:24.550195 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c6f3cbb2725554caa22215e8543ac69bf49348c85777aca9d04fd140cae1cd57', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-05-14 00:07:24.550214 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fda6a15e5c785d6e4842d4a04d2576e2f38b9b5b10ec83889d7f0d179e0398e0', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-05-14 00:07:24.550233 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dd747c726a0df67dda99b94c8107b36e3932f88740b10c79d9a5a7aedd680bfa', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-05-14 00:07:24.550252 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1c048a0ecac9dbcf4f48a99484c53917278c2ef79d9309426ba70a0e33197400', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-05-14 00:07:24.550272 | orchestrator | skipping: [testbed-node-5] => (item={'id': '717dcfb32d33f1116e0bd323a5c11ab5518f0207fcd127de6bca43483efdee42', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-05-14 00:07:24.550291 | orchestrator | 2025-05-14 00:07:24.550312 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-05-14 00:07:24.550334 | orchestrator | Wednesday 14 May 2025 00:07:16 +0000 (0:00:00.506) 0:00:05.188 ********* 2025-05-14 00:07:24.550354 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:24.550369 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:24.550380 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:24.550391 | orchestrator | 2025-05-14 00:07:24.550403 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-05-14 00:07:24.550414 | orchestrator | Wednesday 14 May 2025 00:07:16 +0000 (0:00:00.279) 0:00:05.467 ********* 2025-05-14 00:07:24.550425 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:24.550437 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:07:24.550448 | orchestrator | skipping: [testbed-node-5] 2025-05-14 00:07:24.550459 | orchestrator | 2025-05-14 00:07:24.550473 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-05-14 00:07:24.550486 | orchestrator | Wednesday 14 May 2025 00:07:16 +0000 (0:00:00.532) 0:00:06.000 ********* 2025-05-14 00:07:24.550502 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:24.550523 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:24.550541 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:24.550562 | orchestrator | 2025-05-14 00:07:24.550581 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-14 00:07:24.550600 | orchestrator | Wednesday 14 May 2025 00:07:17 +0000 (0:00:00.326) 0:00:06.327 ********* 2025-05-14 00:07:24.550622 | orchestrator | ok: [testbed-node-3] 
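
The count test derives the expected number of OSD containers per host from the configured device list and compares it with what is actually running; in this testbed each OSD node carries two. Roughly (a sketch; `expected=2` is read off the container listing above rather than computed from the real configuration):

    # Compare running ceph-osd containers against the expected per-host count.
    expected=2
    actual=$(docker ps --filter 'name=^ceph-osd-' --format '{{.Names}}' | wc -l)
    if [ "$actual" -ne "$expected" ]; then
        echo "expected ${expected} ceph-osd containers, found ${actual}" >&2
        exit 1
    fi
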
2025-05-14 00:07:24.550641 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:24.550662 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:24.550682 | orchestrator | 2025-05-14 00:07:24.550701 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-05-14 00:07:24.550722 | orchestrator | Wednesday 14 May 2025 00:07:17 +0000 (0:00:00.324) 0:00:06.651 ********* 2025-05-14 00:07:24.550777 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-05-14 00:07:24.550800 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-05-14 00:07:24.550813 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:24.550827 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-05-14 00:07:24.550839 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-05-14 00:07:24.550849 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:07:24.550860 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-05-14 00:07:24.550871 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-05-14 00:07:24.550882 | orchestrator | skipping: [testbed-node-5] 2025-05-14 00:07:24.550925 | orchestrator | 2025-05-14 00:07:24.550954 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-05-14 00:07:24.550966 | orchestrator | Wednesday 14 May 2025 00:07:17 +0000 (0:00:00.306) 0:00:06.957 ********* 2025-05-14 00:07:24.550977 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:24.550988 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:24.551008 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:24.551029 | orchestrator | 2025-05-14 00:07:24.551074 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-05-14 00:07:24.551093 | orchestrator | Wednesday 14 May 2025 00:07:18 +0000 (0:00:00.521) 0:00:07.479 ********* 2025-05-14 00:07:24.551104 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:24.551115 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:07:24.551129 | orchestrator | skipping: [testbed-node-5] 2025-05-14 00:07:24.551148 | orchestrator | 2025-05-14 00:07:24.551167 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-05-14 00:07:24.551186 | orchestrator | Wednesday 14 May 2025 00:07:18 +0000 (0:00:00.317) 0:00:07.797 ********* 2025-05-14 00:07:24.551205 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:24.551223 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:07:24.551234 | orchestrator | skipping: [testbed-node-5] 2025-05-14 00:07:24.551245 | orchestrator | 2025-05-14 00:07:24.551256 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-05-14 00:07:24.551267 | orchestrator | Wednesday 14 May 2025 00:07:18 +0000 (0:00:00.311) 0:00:08.109 ********* 2025-05-14 00:07:24.551277 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:24.551288 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:24.551300 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:24.551311 | orchestrator | 2025-05-14 00:07:24.551322 | orchestrator | TASK [Aggregate test results step 
one] ***************************************** 2025-05-14 00:07:24.551332 | orchestrator | Wednesday 14 May 2025 00:07:19 +0000 (0:00:00.300) 0:00:08.410 ********* 2025-05-14 00:07:24.551343 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:24.551353 | orchestrator | 2025-05-14 00:07:24.551364 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-14 00:07:24.551375 | orchestrator | Wednesday 14 May 2025 00:07:19 +0000 (0:00:00.664) 0:00:09.074 ********* 2025-05-14 00:07:24.551385 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:24.551396 | orchestrator | 2025-05-14 00:07:24.551407 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-14 00:07:24.551417 | orchestrator | Wednesday 14 May 2025 00:07:20 +0000 (0:00:00.258) 0:00:09.332 ********* 2025-05-14 00:07:24.551428 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:24.551439 | orchestrator | 2025-05-14 00:07:24.551449 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:24.551460 | orchestrator | Wednesday 14 May 2025 00:07:20 +0000 (0:00:00.262) 0:00:09.595 ********* 2025-05-14 00:07:24.551471 | orchestrator | 2025-05-14 00:07:24.551481 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:24.551506 | orchestrator | Wednesday 14 May 2025 00:07:20 +0000 (0:00:00.070) 0:00:09.666 ********* 2025-05-14 00:07:24.551517 | orchestrator | 2025-05-14 00:07:24.551530 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:24.551549 | orchestrator | Wednesday 14 May 2025 00:07:20 +0000 (0:00:00.072) 0:00:09.738 ********* 2025-05-14 00:07:24.551568 | orchestrator | 2025-05-14 00:07:24.551586 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-14 00:07:24.551603 | orchestrator | Wednesday 14 May 2025 00:07:20 +0000 (0:00:00.085) 0:00:09.824 ********* 2025-05-14 00:07:24.551614 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:24.551625 | orchestrator | 2025-05-14 00:07:24.551636 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-05-14 00:07:24.551647 | orchestrator | Wednesday 14 May 2025 00:07:20 +0000 (0:00:00.265) 0:00:10.089 ********* 2025-05-14 00:07:24.551657 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:24.551668 | orchestrator | 2025-05-14 00:07:24.551678 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-14 00:07:24.551689 | orchestrator | Wednesday 14 May 2025 00:07:21 +0000 (0:00:00.224) 0:00:10.314 ********* 2025-05-14 00:07:24.551699 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:24.551710 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:24.551721 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:24.551731 | orchestrator | 2025-05-14 00:07:24.551742 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-05-14 00:07:24.551753 | orchestrator | Wednesday 14 May 2025 00:07:21 +0000 (0:00:00.293) 0:00:10.608 ********* 2025-05-14 00:07:24.551763 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:24.551774 | orchestrator | 2025-05-14 00:07:24.551784 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-05-14 00:07:24.551795 | 
orchestrator | Wednesday 14 May 2025 00:07:22 +0000 (0:00:00.610) 0:00:11.218 ********* 2025-05-14 00:07:24.551805 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-14 00:07:24.551816 | orchestrator | 2025-05-14 00:07:24.551827 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-05-14 00:07:24.551837 | orchestrator | Wednesday 14 May 2025 00:07:23 +0000 (0:00:01.547) 0:00:12.765 ********* 2025-05-14 00:07:24.551848 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:24.551858 | orchestrator | 2025-05-14 00:07:24.551869 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-05-14 00:07:24.551879 | orchestrator | Wednesday 14 May 2025 00:07:23 +0000 (0:00:00.136) 0:00:12.902 ********* 2025-05-14 00:07:24.551914 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:24.551928 | orchestrator | 2025-05-14 00:07:24.551938 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-05-14 00:07:24.551949 | orchestrator | Wednesday 14 May 2025 00:07:23 +0000 (0:00:00.216) 0:00:13.118 ********* 2025-05-14 00:07:24.551960 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:24.551971 | orchestrator | 2025-05-14 00:07:24.551982 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-05-14 00:07:24.551992 | orchestrator | Wednesday 14 May 2025 00:07:24 +0000 (0:00:00.152) 0:00:13.271 ********* 2025-05-14 00:07:24.552004 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:24.552023 | orchestrator | 2025-05-14 00:07:24.552042 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-14 00:07:24.552061 | orchestrator | Wednesday 14 May 2025 00:07:24 +0000 (0:00:00.119) 0:00:13.390 ********* 2025-05-14 00:07:24.552078 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:24.552097 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:24.552117 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:24.552135 | orchestrator | 2025-05-14 00:07:24.552151 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-05-14 00:07:24.552171 | orchestrator | Wednesday 14 May 2025 00:07:24 +0000 (0:00:00.317) 0:00:13.707 ********* 2025-05-14 00:07:36.464244 | orchestrator | changed: [testbed-node-3] 2025-05-14 00:07:36.464380 | orchestrator | changed: [testbed-node-4] 2025-05-14 00:07:36.464396 | orchestrator | changed: [testbed-node-5] 2025-05-14 00:07:36.464408 | orchestrator | 2025-05-14 00:07:36.464421 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-05-14 00:07:36.464434 | orchestrator | Wednesday 14 May 2025 00:07:27 +0000 (0:00:02.479) 0:00:16.187 ********* 2025-05-14 00:07:36.464445 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:36.464458 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:36.464469 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:36.464480 | orchestrator | 2025-05-14 00:07:36.464490 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-05-14 00:07:36.464502 | orchestrator | Wednesday 14 May 2025 00:07:27 +0000 (0:00:00.343) 0:00:16.530 ********* 2025-05-14 00:07:36.464513 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:36.464523 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:36.464534 | orchestrator | ok: [testbed-node-5] 2025-05-14 
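
"Get ceph osd tree" is delegated to a mon host and the parsed JSON is scanned for OSDs that are not up or not in. The same check can be expressed directly against `ceph osd dump`, whose `osds` entries carry `up` and `in` as 0/1 flags (a sketch; it assumes the command runs where a Ceph keyring is available, e.g. inside a mon container):

    # List any OSD that is not both up and in; empty output means the test passes.
    ceph osd dump -f json \
        | jq -r '.osds[] | select(.up != 1 or .in != 1) | "osd.\(.osd) up=\(.up) in=\(.in)"'
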
00:07:36.464546 | orchestrator | 2025-05-14 00:07:36.464557 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-05-14 00:07:36.464568 | orchestrator | Wednesday 14 May 2025 00:07:27 +0000 (0:00:00.404) 0:00:16.935 ********* 2025-05-14 00:07:36.464579 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:36.464590 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:07:36.464601 | orchestrator | skipping: [testbed-node-5] 2025-05-14 00:07:36.464612 | orchestrator | 2025-05-14 00:07:36.464623 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-05-14 00:07:36.464634 | orchestrator | Wednesday 14 May 2025 00:07:28 +0000 (0:00:00.287) 0:00:17.222 ********* 2025-05-14 00:07:36.464645 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:36.464656 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:36.464667 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:36.464678 | orchestrator | 2025-05-14 00:07:36.464736 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-05-14 00:07:36.464749 | orchestrator | Wednesday 14 May 2025 00:07:28 +0000 (0:00:00.487) 0:00:17.710 ********* 2025-05-14 00:07:36.464760 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:36.464773 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:07:36.464786 | orchestrator | skipping: [testbed-node-5] 2025-05-14 00:07:36.464798 | orchestrator | 2025-05-14 00:07:36.464811 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-05-14 00:07:36.464824 | orchestrator | Wednesday 14 May 2025 00:07:28 +0000 (0:00:00.310) 0:00:18.021 ********* 2025-05-14 00:07:36.464838 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:36.464850 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:07:36.464862 | orchestrator | skipping: [testbed-node-5] 2025-05-14 00:07:36.464875 | orchestrator | 2025-05-14 00:07:36.464888 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-14 00:07:36.464959 | orchestrator | Wednesday 14 May 2025 00:07:29 +0000 (0:00:00.282) 0:00:18.303 ********* 2025-05-14 00:07:36.464980 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:36.464999 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:36.465012 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:36.465037 | orchestrator | 2025-05-14 00:07:36.465049 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-05-14 00:07:36.465062 | orchestrator | Wednesday 14 May 2025 00:07:29 +0000 (0:00:00.406) 0:00:18.710 ********* 2025-05-14 00:07:36.465074 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:36.465087 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:36.465099 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:36.465113 | orchestrator | 2025-05-14 00:07:36.465125 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-05-14 00:07:36.465137 | orchestrator | Wednesday 14 May 2025 00:07:30 +0000 (0:00:00.665) 0:00:19.375 ********* 2025-05-14 00:07:36.465148 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:36.465158 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:36.465169 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:36.465190 | orchestrator | 2025-05-14 00:07:36.465200 | orchestrator | TASK [Fail test if any sub test failed] 
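
The encryption tests count encrypted versus unencrypted OSDs from the inventory gathered in "List ceph LVM volumes and collect data"; here all OSDs are encrypted, so only the "equals count of OSDs" branch passes. A sketch of that count (assumptions: ceph-volume marks encrypted OSDs with the LVM tag `ceph.encrypted` set to `"1"`, and the command is reachable via one of the OSD containers):

    # Count encrypted OSDs on this host from the ceph-volume LVM inventory.
    docker exec ceph-osd-0 ceph-volume lvm list --format json \
        | jq '[.[][] | select(.tags["ceph.encrypted"] == "1")] | length'
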
**************************************** 2025-05-14 00:07:36.465211 | orchestrator | Wednesday 14 May 2025 00:07:30 +0000 (0:00:00.313) 0:00:19.688 ********* 2025-05-14 00:07:36.465222 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:36.465233 | orchestrator | skipping: [testbed-node-4] 2025-05-14 00:07:36.465243 | orchestrator | skipping: [testbed-node-5] 2025-05-14 00:07:36.465254 | orchestrator | 2025-05-14 00:07:36.465265 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-05-14 00:07:36.465275 | orchestrator | Wednesday 14 May 2025 00:07:30 +0000 (0:00:00.315) 0:00:20.003 ********* 2025-05-14 00:07:36.465286 | orchestrator | ok: [testbed-node-3] 2025-05-14 00:07:36.465297 | orchestrator | ok: [testbed-node-4] 2025-05-14 00:07:36.465308 | orchestrator | ok: [testbed-node-5] 2025-05-14 00:07:36.465319 | orchestrator | 2025-05-14 00:07:36.465330 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-14 00:07:36.465341 | orchestrator | Wednesday 14 May 2025 00:07:31 +0000 (0:00:00.501) 0:00:20.505 ********* 2025-05-14 00:07:36.465351 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:36.465362 | orchestrator | 2025-05-14 00:07:36.465373 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-14 00:07:36.465383 | orchestrator | Wednesday 14 May 2025 00:07:31 +0000 (0:00:00.264) 0:00:20.770 ********* 2025-05-14 00:07:36.465394 | orchestrator | skipping: [testbed-node-3] 2025-05-14 00:07:36.465405 | orchestrator | 2025-05-14 00:07:36.465416 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-14 00:07:36.465426 | orchestrator | Wednesday 14 May 2025 00:07:31 +0000 (0:00:00.256) 0:00:21.027 ********* 2025-05-14 00:07:36.465437 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:36.465448 | orchestrator | 2025-05-14 00:07:36.465464 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-14 00:07:36.465475 | orchestrator | Wednesday 14 May 2025 00:07:33 +0000 (0:00:01.622) 0:00:22.649 ********* 2025-05-14 00:07:36.465486 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:36.465497 | orchestrator | 2025-05-14 00:07:36.465508 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-14 00:07:36.465518 | orchestrator | Wednesday 14 May 2025 00:07:33 +0000 (0:00:00.261) 0:00:22.911 ********* 2025-05-14 00:07:36.465548 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:36.465560 | orchestrator | 2025-05-14 00:07:36.465572 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:36.465582 | orchestrator | Wednesday 14 May 2025 00:07:34 +0000 (0:00:00.273) 0:00:23.184 ********* 2025-05-14 00:07:36.465593 | orchestrator | 2025-05-14 00:07:36.465604 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:36.465615 | orchestrator | Wednesday 14 May 2025 00:07:34 +0000 (0:00:00.073) 0:00:23.258 ********* 2025-05-14 00:07:36.465625 | orchestrator | 2025-05-14 00:07:36.465636 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-14 00:07:36.465647 | orchestrator | Wednesday 14 May 
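
The three "Aggregate test results" steps collect the per-test verdicts into the report that the "Write report file" handler then persists on the manager. Conceptually, the roll-up reduces to "passed only if every test passed" (a sketch in jq; the `tests[].result` shape and the `tests.json` input file are assumptions, not the validator's real schema):

    # Roll per-test results up into a single validator verdict (schema assumed).
    jq '{validator: "ceph-osds",
         result: (if all(.tests[]; .result == "passed") then "passed" else "failed" end)}' tests.json
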
2025 00:07:34 +0000 (0:00:00.068) 0:00:23.326 ********* 2025-05-14 00:07:36.465658 | orchestrator | 2025-05-14 00:07:36.465669 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-14 00:07:36.465679 | orchestrator | Wednesday 14 May 2025 00:07:34 +0000 (0:00:00.080) 0:00:23.406 ********* 2025-05-14 00:07:36.465690 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 00:07:36.465701 | orchestrator | 2025-05-14 00:07:36.465712 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-14 00:07:36.465722 | orchestrator | Wednesday 14 May 2025 00:07:35 +0000 (0:00:01.274) 0:00:24.680 ********* 2025-05-14 00:07:36.465733 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-05-14 00:07:36.465744 | orchestrator |  "msg": [ 2025-05-14 00:07:36.465762 | orchestrator |  "Validator run completed.", 2025-05-14 00:07:36.465773 | orchestrator |  "You can find the report file here:", 2025-05-14 00:07:36.465784 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-05-14T00:07:11+00:00-report.json", 2025-05-14 00:07:36.465796 | orchestrator |  "on the following host:", 2025-05-14 00:07:36.465807 | orchestrator |  "testbed-manager" 2025-05-14 00:07:36.465818 | orchestrator |  ] 2025-05-14 00:07:36.465830 | orchestrator | } 2025-05-14 00:07:36.465841 | orchestrator | 2025-05-14 00:07:36.465859 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 00:07:36.465881 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-05-14 00:07:36.465923 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-14 00:07:36.465942 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-14 00:07:36.465972 | orchestrator | 2025-05-14 00:07:36.465987 | orchestrator | 2025-05-14 00:07:36.466007 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 00:07:36.466108 | orchestrator | Wednesday 14 May 2025 00:07:36 +0000 (0:00:00.621) 0:00:25.302 ********* 2025-05-14 00:07:36.466121 | orchestrator | =============================================================================== 2025-05-14 00:07:36.466132 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.48s 2025-05-14 00:07:36.466143 | orchestrator | Aggregate test results step one ----------------------------------------- 1.62s 2025-05-14 00:07:36.466154 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.55s 2025-05-14 00:07:36.466164 | orchestrator | Write report file ------------------------------------------------------- 1.27s 2025-05-14 00:07:36.466175 | orchestrator | Create report output directory ------------------------------------------ 1.04s 2025-05-14 00:07:36.466186 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.67s 2025-05-14 00:07:36.466197 | orchestrator | Aggregate test results step one ----------------------------------------- 0.66s 2025-05-14 00:07:36.466207 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-05-14 00:07:36.466218 | orchestrator | Print report file information ------------------------------------------- 0.62s 2025-05-14 
00:07:36.466229 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.61s 2025-05-14 00:07:36.466239 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.58s 2025-05-14 00:07:36.466250 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2025-05-14 00:07:36.466260 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.53s 2025-05-14 00:07:36.466271 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.52s 2025-05-14 00:07:36.466282 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.51s 2025-05-14 00:07:36.466292 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.50s 2025-05-14 00:07:36.466303 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.49s 2025-05-14 00:07:36.466313 | orchestrator | Prepare test data ------------------------------------------------------- 0.41s 2025-05-14 00:07:36.466324 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.40s 2025-05-14 00:07:36.466335 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.40s 2025-05-14 00:07:36.759609 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-05-14 00:07:36.764712 | orchestrator | + set -e 2025-05-14 00:07:36.764763 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 00:07:36.764772 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 00:07:36.764779 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 00:07:36.764804 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 00:07:36.764811 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 00:07:36.764817 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 00:07:36.764825 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 00:07:36.764832 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-14 00:07:36.764838 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-14 00:07:36.764844 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 00:07:36.764851 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 00:07:36.764857 | orchestrator | ++ export ARA=false 2025-05-14 00:07:36.764863 | orchestrator | ++ ARA=false 2025-05-14 00:07:36.764869 | orchestrator | ++ export TEMPEST=false 2025-05-14 00:07:36.764876 | orchestrator | ++ TEMPEST=false 2025-05-14 00:07:36.764882 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 00:07:36.764888 | orchestrator | ++ IS_ZUUL=true 2025-05-14 00:07:36.764931 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-14 00:07:36.764938 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.58 2025-05-14 00:07:36.764945 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 00:07:36.764951 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 00:07:36.764957 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 00:07:36.764963 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 00:07:36.764969 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-14 00:07:36.764975 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 00:07:36.764982 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 00:07:36.764988 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 00:07:36.764994 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-14 00:07:36.765000 | 
orchestrator | + source /etc/os-release 2025-05-14 00:07:36.765006 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-05-14 00:07:36.765012 | orchestrator | ++ NAME=Ubuntu 2025-05-14 00:07:36.765018 | orchestrator | ++ VERSION_ID=24.04 2025-05-14 00:07:36.765025 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-05-14 00:07:36.765031 | orchestrator | ++ VERSION_CODENAME=noble 2025-05-14 00:07:36.765037 | orchestrator | ++ ID=ubuntu 2025-05-14 00:07:36.765043 | orchestrator | ++ ID_LIKE=debian 2025-05-14 00:07:36.765049 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-05-14 00:07:36.765056 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-05-14 00:07:36.765062 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-05-14 00:07:36.765068 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-05-14 00:07:36.765075 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-05-14 00:07:36.765082 | orchestrator | ++ LOGO=ubuntu-logo 2025-05-14 00:07:36.765088 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-05-14 00:07:36.765094 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-05-14 00:07:36.765103 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-05-14 00:07:36.789427 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-05-14 00:07:57.151397 | orchestrator | 2025-05-14 00:07:57.151485 | orchestrator | # Status of Elasticsearch 2025-05-14 00:07:57.151496 | orchestrator | 2025-05-14 00:07:57.151504 | orchestrator | + pushd /opt/configuration/contrib 2025-05-14 00:07:57.151514 | orchestrator | + echo 2025-05-14 00:07:57.151524 | orchestrator | + echo '# Status of Elasticsearch' 2025-05-14 00:07:57.151537 | orchestrator | + echo 2025-05-14 00:07:57.151550 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-05-14 00:07:57.337464 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 11; active_shards: 27; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=11 'active'=27 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-05-14 00:07:57.337726 | orchestrator | 2025-05-14 00:07:57.337750 | orchestrator | # Status of MariaDB 2025-05-14 00:07:57.337763 | orchestrator | 2025-05-14 00:07:57.337774 | orchestrator | + echo 2025-05-14 00:07:57.337786 | orchestrator | + echo '# Status of MariaDB' 2025-05-14 00:07:57.337797 | orchestrator | + echo 2025-05-14 00:07:57.337808 | orchestrator | + MARIADB_USER=root_shard_0 2025-05-14 00:07:57.337820 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-05-14 00:07:57.397564 | orchestrator | Reading package lists... 2025-05-14 00:07:57.718466 | orchestrator | Building dependency tree... 2025-05-14 00:07:57.719135 | orchestrator | Reading state information... 2025-05-14 00:07:58.102309 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-05-14 00:07:58.102416 | orchestrator | bc set to manually installed. 2025-05-14 00:07:58.102432 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
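The check script verifies its monitoring dependencies with dpkg -s before calling apt-get, so the install step is cheap on a node that already has them. A minimal sketch of that guard, assuming the same package list as in the trace above (the || chaining is an assumption; the script may simply run both commands unconditionally):

    #!/usr/bin/env bash
    set -e
    packages="libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client"
    # dpkg -s exits non-zero if any queried package is not installed,
    # so apt-get only runs when something is still missing.
    dpkg -s $packages >/dev/null 2>&1 || sudo apt-get install -y $packages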
2025-05-14 00:07:58.781546 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-05-14 00:07:58.782445 | orchestrator | 2025-05-14 00:07:58.782466 | orchestrator | # Status of Prometheus 2025-05-14 00:07:58.782475 | orchestrator | 2025-05-14 00:07:58.782482 | orchestrator | + echo 2025-05-14 00:07:58.782490 | orchestrator | + echo '# Status of Prometheus' 2025-05-14 00:07:58.782498 | orchestrator | + echo 2025-05-14 00:07:58.782505 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-05-14 00:07:58.831743 | orchestrator | Unauthorized 2025-05-14 00:07:58.835147 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-05-14 00:07:58.906701 | orchestrator | Unauthorized 2025-05-14 00:07:58.910135 | orchestrator | 2025-05-14 00:07:58.910237 | orchestrator | # Status of RabbitMQ 2025-05-14 00:07:58.910254 | orchestrator | 2025-05-14 00:07:58.910264 | orchestrator | + echo 2025-05-14 00:07:58.910275 | orchestrator | + echo '# Status of RabbitMQ' 2025-05-14 00:07:58.910285 | orchestrator | + echo 2025-05-14 00:07:58.910295 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-05-14 00:07:59.409089 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-05-14 00:07:59.419718 | orchestrator | 2025-05-14 00:07:59.419844 | orchestrator | # Status of Redis 2025-05-14 00:07:59.419873 | orchestrator | 2025-05-14 00:07:59.419885 | orchestrator | + echo 2025-05-14 00:07:59.419924 | orchestrator | + echo '# Status of Redis' 2025-05-14 00:07:59.419946 | orchestrator | + echo 2025-05-14 00:07:59.419968 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-05-14 00:07:59.422751 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001348s;;;0.000000;10.000000 2025-05-14 00:07:59.423311 | orchestrator | 2025-05-14 00:07:59.423388 | orchestrator | # Create backup of MariaDB database 2025-05-14 00:07:59.423400 | orchestrator | 2025-05-14 00:07:59.423408 | orchestrator | + popd 2025-05-14 00:07:59.423415 | orchestrator | + echo 2025-05-14 00:07:59.423422 | orchestrator | + echo '# Create backup of MariaDB database' 2025-05-14 00:07:59.423429 | orchestrator | + echo 2025-05-14 00:07:59.423436 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-05-14 00:08:01.231619 | orchestrator | 2025-05-14 00:08:01 | INFO  | Task 7ff9138b-2235-4cf4-878d-5b59468ccad8 (mariadb_backup) was prepared for execution. 2025-05-14 00:08:01.231719 | orchestrator | 2025-05-14 00:08:01 | INFO  | It takes a moment until task 7ff9138b-2235-4cf4-878d-5b59468ccad8 (mariadb_backup) has been started and output is visible here. 
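The two "Unauthorized" replies from the Prometheus health endpoints above do not fail the script: the API sits behind basic auth, and plain curl -s still exits 0 on an HTTP 401, so set -e never triggers. A stricter probe would pass credentials and -f so that a genuinely unhealthy endpoint aborts the check (the user name and password below are placeholders, not values from this deployment):

    # -f makes curl exit non-zero on HTTP errors such as 401/503,
    # which would stop the surrounding script under `set -e`.
    curl -sf -u admin:secret https://api-int.testbed.osism.xyz:9091/-/healthy
    curl -sf -u admin:secret https://api-int.testbed.osism.xyz:9091/-/ready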
2025-05-14 00:08:05.036963 | orchestrator | 2025-05-14 00:08:05.041820 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 00:08:05.041866 | orchestrator | 2025-05-14 00:08:05.043558 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 00:08:05.045033 | orchestrator | Wednesday 14 May 2025 00:08:05 +0000 (0:00:00.191) 0:00:00.191 ********* 2025-05-14 00:08:05.234385 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:08:05.365861 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:08:05.366759 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:08:05.368356 | orchestrator | 2025-05-14 00:08:05.369687 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 00:08:05.370639 | orchestrator | Wednesday 14 May 2025 00:08:05 +0000 (0:00:00.329) 0:00:00.521 ********* 2025-05-14 00:08:05.958214 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-14 00:08:05.960104 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-14 00:08:05.961542 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-14 00:08:05.963384 | orchestrator | 2025-05-14 00:08:05.964069 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-14 00:08:05.964402 | orchestrator | 2025-05-14 00:08:05.965158 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-14 00:08:05.965574 | orchestrator | Wednesday 14 May 2025 00:08:05 +0000 (0:00:00.591) 0:00:01.113 ********* 2025-05-14 00:08:06.391754 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 00:08:06.393155 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 00:08:06.393994 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 00:08:06.395095 | orchestrator | 2025-05-14 00:08:06.396097 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 00:08:06.396835 | orchestrator | Wednesday 14 May 2025 00:08:06 +0000 (0:00:00.432) 0:00:01.545 ********* 2025-05-14 00:08:06.993185 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 00:08:06.996794 | orchestrator | 2025-05-14 00:08:06.996860 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-14 00:08:06.996874 | orchestrator | Wednesday 14 May 2025 00:08:06 +0000 (0:00:00.602) 0:00:02.148 ********* 2025-05-14 00:08:10.332097 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:08:10.335333 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:08:10.336480 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:08:10.337571 | orchestrator | 2025-05-14 00:08:10.338285 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-05-14 00:08:10.338991 | orchestrator | Wednesday 14 May 2025 00:08:10 +0000 (0:00:03.334) 0:00:05.483 ********* 2025-05-14 00:09:02.801709 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-14 00:09:02.801852 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-14 00:09:02.801881 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-14 00:09:02.801994 | orchestrator | 
mariadb_bootstrap_restart 2025-05-14 00:09:02.879866 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:09:02.884364 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:09:02.887687 | orchestrator | changed: [testbed-node-0] 2025-05-14 00:09:02.888448 | orchestrator | 2025-05-14 00:09:02.889112 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-14 00:09:02.889755 | orchestrator | skipping: no hosts matched 2025-05-14 00:09:02.890274 | orchestrator | 2025-05-14 00:09:02.894564 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-14 00:09:02.894618 | orchestrator | skipping: no hosts matched 2025-05-14 00:09:02.894632 | orchestrator | 2025-05-14 00:09:02.895234 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-14 00:09:02.896608 | orchestrator | skipping: no hosts matched 2025-05-14 00:09:02.896974 | orchestrator | 2025-05-14 00:09:02.897499 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-14 00:09:02.898426 | orchestrator | 2025-05-14 00:09:02.898558 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-14 00:09:02.898998 | orchestrator | Wednesday 14 May 2025 00:09:02 +0000 (0:00:52.553) 0:00:58.037 ********* 2025-05-14 00:09:03.043847 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:09:03.159261 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:09:03.159535 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:09:03.160032 | orchestrator | 2025-05-14 00:09:03.160984 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-14 00:09:03.161725 | orchestrator | Wednesday 14 May 2025 00:09:03 +0000 (0:00:00.279) 0:00:58.317 ********* 2025-05-14 00:09:03.446675 | orchestrator | skipping: [testbed-node-0] 2025-05-14 00:09:03.486115 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:09:03.486545 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:09:03.487424 | orchestrator | 2025-05-14 00:09:03.489048 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 00:09:03.490149 | orchestrator | 2025-05-14 00:09:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 00:09:03.490378 | orchestrator | 2025-05-14 00:09:03 | INFO  | Please wait and do not abort execution. 
2025-05-14 00:09:03.493642 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 00:09:03.493673 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 00:09:03.494304 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 00:09:03.498216 | orchestrator | 2025-05-14 00:09:03.498284 | orchestrator | 2025-05-14 00:09:03.498298 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 00:09:03.498697 | orchestrator | Wednesday 14 May 2025 00:09:03 +0000 (0:00:00.324) 0:00:58.641 ********* 2025-05-14 00:09:03.498973 | orchestrator | =============================================================================== 2025-05-14 00:09:03.499559 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 52.55s 2025-05-14 00:09:03.502861 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.34s 2025-05-14 00:09:03.505599 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.60s 2025-05-14 00:09:03.505630 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2025-05-14 00:09:03.505642 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2025-05-14 00:09:03.505653 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-05-14 00:09:03.505665 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.32s 2025-05-14 00:09:03.505676 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.28s 2025-05-14 00:09:03.858847 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental 2025-05-14 00:09:05.582353 | orchestrator | 2025-05-14 00:09:05 | INFO  | Task e25ef456-b13f-4d55-b32c-964e206ee57e (mariadb_backup) was prepared for execution. 2025-05-14 00:09:05.582472 | orchestrator | 2025-05-14 00:09:05 | INFO  | It takes a moment until task e25ef456-b13f-4d55-b32c-964e206ee57e (mariadb_backup) has been started and output is visible here. 
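The full backup above reported changed only on testbed-node-0: kolla-ansible evidently drives Mariabackup on one host per shard and streams the result into /backup in a one-shot container. Judging by the kolla_mariadb_backup_replica.sh excerpt in the failure output below, the two backup types map onto mariabackup roughly as follows (a sketch; REPLICA_MY_CNF, BACKUP_DIR, and LAST_FULL_DATE are variables the script provides):

    # Full backup: stream the whole datadir and record the run in the
    # server-side backup history under ${LAST_FULL_DATE}.
    mariabackup --defaults-file="${REPLICA_MY_CNF}" --backup \
        --stream=mbstream --history="${LAST_FULL_DATE}" \
      | gzip > "${BACKUP_DIR}/mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz"

    # Incremental backup: copy only pages changed since the recorded
    # full backup, referenced via its history name.
    mariabackup --defaults-file="${REPLICA_MY_CNF}" --backup \
        --stream=mbstream \
        --incremental-history-name="${LAST_FULL_DATE}" \
        --history="${LAST_FULL_DATE}" \
      | gzip > "${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz"

It is this incremental invocation that segfaults below.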
2025-05-14 00:09:09.735081 | orchestrator | 2025-05-14 00:09:09.736450 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 00:09:09.737083 | orchestrator | 2025-05-14 00:09:09.737639 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 00:09:09.739690 | orchestrator | Wednesday 14 May 2025 00:09:09 +0000 (0:00:00.202) 0:00:00.202 ********* 2025-05-14 00:09:09.922510 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:09:10.044973 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:09:10.045447 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:09:10.046454 | orchestrator | 2025-05-14 00:09:10.051020 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 00:09:10.051120 | orchestrator | Wednesday 14 May 2025 00:09:10 +0000 (0:00:00.314) 0:00:00.516 ********* 2025-05-14 00:09:10.611368 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-14 00:09:10.612577 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-14 00:09:10.613001 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-14 00:09:10.614114 | orchestrator | 2025-05-14 00:09:10.614926 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-14 00:09:10.615740 | orchestrator | 2025-05-14 00:09:10.616267 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-14 00:09:10.617668 | orchestrator | Wednesday 14 May 2025 00:09:10 +0000 (0:00:00.566) 0:00:01.082 ********* 2025-05-14 00:09:11.043806 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 00:09:11.048496 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 00:09:11.050089 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 00:09:11.050729 | orchestrator | 2025-05-14 00:09:11.051193 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 00:09:11.051611 | orchestrator | Wednesday 14 May 2025 00:09:11 +0000 (0:00:00.428) 0:00:01.511 ********* 2025-05-14 00:09:11.554338 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 00:09:11.554752 | orchestrator | 2025-05-14 00:09:11.555969 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-14 00:09:11.559733 | orchestrator | Wednesday 14 May 2025 00:09:11 +0000 (0:00:00.513) 0:00:02.025 ********* 2025-05-14 00:09:14.839922 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:09:14.840023 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:09:14.841560 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:09:14.843596 | orchestrator | 2025-05-14 00:09:14.844578 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-05-14 00:09:14.845564 | orchestrator | Wednesday 14 May 2025 00:09:14 +0000 (0:00:03.278) 0:00:05.303 ********* 2025-05-14 00:09:19.634596 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:09:19.634720 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:09:19.637523 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 139", "rc": 139, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-05-14 00:09:18 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-05-14 00:09:18 Using server version 10.11.12-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.12-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-05-14 00:09:18 incremental backup from 0 is enabled.\n[00] 2025-05-14 00:09:18 uses posix_fadvise().\n[00] 2025-05-14 00:09:18 cd to /var/lib/mysql/\n[00] 2025-05-14 00:09:18 open files limit requested 0, set to 1048576\n[00] 2025-05-14 00:09:18 mariabackup: using the following InnoDB configuration:\n[00] 2025-05-14 00:09:18 innodb_data_home_dir = \n[00] 2025-05-14 00:09:18 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-05-14 00:09:18 innodb_log_group_home_dir = ./\n[00] 2025-05-14 00:09:18 InnoDB: Using liburing\n2025-05-14 0:09:18 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).\n2025-05-14 0:09:18 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-05-14 0:09:18 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n250514 0:09:18 [ERROR] mariabackup got signal 11 ;\nSorry, we probably made a mistake, and this is a bug.\n\nYour assistance in bug reporting will enable us to fix this for the next release.\nTo report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report\na bug on https://jira.mariadb.org/.\n\nPlease include the information from the server start above, to the end of the\ninformation below.\n\nServer version: 10.11.12-MariaDB-deb12 source revision: cafd22db7970ce081bafd887359aa0a77cfb769d\n\nThe information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/\ncontains instructions to obtain a better version of the backtrace below.\nFollowing these instructions will help MariaDB developers provide a fix quicker.\n\nAttempting backtrace. 
Include this in the bug report.\n(note: Retrieving this information may fail)\n\nThread pointer: 0x0\nstack_bottom = 0x0 thread_stack 0x49000\nPrinting to addr2line failed\nmariabackup(my_print_stacktrace+0x2e)[0x5c426f13839e]\nmariabackup(handle_fatal_signal+0x229)[0x5c426ec5b689]\n/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x711004ea7050]\nmariabackup(server_mysql_fetch_row+0x14)[0x5c426e8a7424]\nmariabackup(+0x76ca37)[0x5c426e879a37]\nmariabackup(+0x75f32a)[0x5c426e86c32a]\nmariabackup(main+0x163)[0x5c426e811003]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x711004e9224a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x711004e92305]\nmariabackup(_start+0x21)[0x5c426e856111]\nWriting a core file...\nWorking directory at /var/lib/mysql\nResource Limits (excludes unlimited resources):\nLimit Soft Limit Hard Limit Units \nMax stack size 8388608 unlimited bytes \nMax open files 1048576 1048576 files \nMax locked memory 8388608 8388608 bytes \nMax pending signals 128077 128077 signals \nMax msgqueue size 819200 819200 bytes \nMax nice priority 0 0 \nMax realtime priority 0 0 \nCore pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E\n\nKernel version: Linux version 6.11.0-25-generic (buildd@lcy02-amd64-027) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 17:20:50 UTC 2\n\n/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"\n 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\"\n", "stderr_lines": ["INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying service configuration files", "INFO:__main__:Deleting /etc/mysql/my.cnf", "INFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf", "INFO:__main__:Setting permission for /etc/mysql/my.cnf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla/mariadb", "INFO:__main__:Setting permission for /backup", "[00] 2025-05-14 00:09:18 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-05-14 00:09:18 Using server version 10.11.12-MariaDB-deb12-log", "mariabackup based on MariaDB server 10.11.12-MariaDB debian-linux-gnu (x86_64)", "[00] 2025-05-14 00:09:18 incremental backup from 0 is enabled.", "[00] 2025-05-14 00:09:18 uses posix_fadvise().", "[00] 2025-05-14 00:09:18 cd to /var/lib/mysql/", "[00] 2025-05-14 00:09:18 open files limit requested 0, set to 1048576", "[00] 2025-05-14 00:09:18 mariabackup: using the following InnoDB configuration:", "[00] 2025-05-14 00:09:18 innodb_data_home_dir = ", "[00] 2025-05-14 00:09:18 innodb_data_file_path = ibdata1:12M:autoextend", "[00] 2025-05-14 00:09:18 innodb_log_group_home_dir = ./", "[00] 2025-05-14 00:09:18 InnoDB: Using liburing", "2025-05-14 0:09:18 0 [Note] InnoDB: Number of transaction pools: 1", "mariabackup: io_uring_queue_init() failed with EPERM: 
sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).", "2025-05-14 0:09:18 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF", "2025-05-14 0:09:18 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)", "250514 0:09:18 [ERROR] mariabackup got signal 11 ;", "Sorry, we probably made a mistake, and this is a bug.", "", "Your assistance in bug reporting will enable us to fix this for the next release.", "To report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report", "a bug on https://jira.mariadb.org/.", "", "Please include the information from the server start above, to the end of the", "information below.", "", "Server version: 10.11.12-MariaDB-deb12 source revision: cafd22db7970ce081bafd887359aa0a77cfb769d", "", "The information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/", "contains instructions to obtain a better version of the backtrace below.", "Following these instructions will help MariaDB developers provide a fix quicker.", "", "Attempting backtrace. Include this in the bug report.", "(note: Retrieving this information may fail)", "", "Thread pointer: 0x0", "stack_bottom = 0x0 thread_stack 0x49000", "Printing to addr2line failed", "mariabackup(my_print_stacktrace+0x2e)[0x5c426f13839e]", "mariabackup(handle_fatal_signal+0x229)[0x5c426ec5b689]", "/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x711004ea7050]", "mariabackup(server_mysql_fetch_row+0x14)[0x5c426e8a7424]", "mariabackup(+0x76ca37)[0x5c426e879a37]", "mariabackup(+0x75f32a)[0x5c426e86c32a]", "mariabackup(main+0x163)[0x5c426e811003]", "/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x711004e9224a]", "/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x711004e92305]", "mariabackup(_start+0x21)[0x5c426e856111]", "Writing a core file...", "Working directory at /var/lib/mysql", "Resource Limits (excludes unlimited resources):", "Limit Soft Limit Hard Limit Units ", "Max stack size 8388608 unlimited bytes ", "Max open files 1048576 1048576 files ", "Max locked memory 8388608 8388608 bytes ", "Max pending signals 128077 128077 signals ", "Max msgqueue size 819200 819200 bytes ", "Max nice priority 0 0 ", "Max realtime priority 0 0 ", "Core pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E", "", "Kernel version: Linux version 6.11.0-25-generic (buildd@lcy02-amd64-027) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 17:20:50 UTC 2", "", "/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"", " 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\""], "stdout": "Taking an incremental backup\n", "stdout_lines": ["Taking an incremental backup"]} 2025-05-14 00:09:19.785790 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-14 00:09:19.786926 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-14 00:09:19.788478 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-14 00:09:19.789387 | orchestrator | mariadb_bootstrap_restart 2025-05-14 
00:09:19.858680 | orchestrator | 2025-05-14 00:09:19.862437 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-14 00:09:19.862489 | orchestrator | skipping: no hosts matched 2025-05-14 00:09:19.862504 | orchestrator | 2025-05-14 00:09:19.862603 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-14 00:09:19.863852 | orchestrator | skipping: no hosts matched 2025-05-14 00:09:19.865447 | orchestrator | 2025-05-14 00:09:19.867236 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-14 00:09:19.868363 | orchestrator | skipping: no hosts matched 2025-05-14 00:09:19.869222 | orchestrator | 2025-05-14 00:09:19.870536 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-14 00:09:19.871355 | orchestrator | 2025-05-14 00:09:19.872267 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-14 00:09:19.873173 | orchestrator | Wednesday 14 May 2025 00:09:19 +0000 (0:00:05.027) 0:00:10.331 ********* 2025-05-14 00:09:20.061962 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:09:20.062237 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:09:20.062734 | orchestrator | 2025-05-14 00:09:20.063023 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-14 00:09:20.063286 | orchestrator | Wednesday 14 May 2025 00:09:20 +0000 (0:00:00.204) 0:00:10.535 ********* 2025-05-14 00:09:20.178822 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:09:20.179964 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:09:20.180022 | orchestrator | 2025-05-14 00:09:20.182463 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 00:09:20.182514 | orchestrator | 2025-05-14 00:09:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 00:09:20.182594 | orchestrator | 2025-05-14 00:09:20 | INFO  | Please wait and do not abort execution. 
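The segfault itself (signal 11 in server_mysql_fetch_row) is a mariabackup bug, but the trigger chain is visible in the stderr above: io_uring_queue_init() fails with EPERM because the host kernel sets kernel.io_uring_disabled=2, InnoDB falls back to innodb_use_native_aio=OFF, and mariabackup 10.11.12 then crashes on that code path. A diagnostic sketch for the host (relaxing the sysctl is a possible workaround to avoid the fallback, not a fix for the underlying bug):

    # io_uring restriction on kernels >= 6.6:
    # 0 = enabled, 1 = limited to members of kernel.io_uring_group, 2 = disabled.
    sysctl kernel.io_uring_disabled kernel.io_uring_group
    # Temporarily allow io_uring again so InnoDB does not fall back:
    sudo sysctl -w kernel.io_uring_disabled=0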
2025-05-14 00:09:20.183487 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-14 00:09:20.184392 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 00:09:20.185068 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 00:09:20.186487 | orchestrator | 2025-05-14 00:09:20.187465 | orchestrator | 2025-05-14 00:09:20.188422 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 00:09:20.188849 | orchestrator | Wednesday 14 May 2025 00:09:20 +0000 (0:00:00.115) 0:00:10.651 ********* 2025-05-14 00:09:20.190422 | orchestrator | =============================================================================== 2025-05-14 00:09:20.190834 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ------------ 5.03s 2025-05-14 00:09:20.191776 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.28s 2025-05-14 00:09:20.192707 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2025-05-14 00:09:20.193481 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.51s 2025-05-14 00:09:20.193831 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2025-05-14 00:09:20.194337 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-05-14 00:09:20.195045 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.20s 2025-05-14 00:09:20.195336 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.12s 2025-05-14 00:09:20.471682 | orchestrator | 2025-05-14 00:09:20 | INFO  | Task 0ef752e4-7f99-4e33-b55a-8f29ab3209d7 (mariadb_backup) was prepared for execution. 2025-05-14 00:09:20.474230 | orchestrator | 2025-05-14 00:09:20 | INFO  | It takes a moment until task 0ef752e4-7f99-4e33-b55a-8f29ab3209d7 (mariadb_backup) has been started and output is visible here. 
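The backtrace above ends with "Writing a core file...", and the core pattern on these Ubuntu 24.04 nodes pipes into Apport, so if the crash was captured it should surface under /var/crash on the affected node. A retrieval sketch for the upstream bug report (whether Apport accepts a crash originating inside the kolla container is not guaranteed; if /var/crash stays empty, the core was dropped):

    # List Apport crash reports on the affected node:
    ls -l /var/crash/
    # Unpack a report to obtain the raw core dump for gdb
    # (the file name pattern here is illustrative):
    sudo apport-unpack /var/crash/*mariabackup*.crash /tmp/mariabackup-crash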
2025-05-14 00:09:24.163721 | orchestrator | 2025-05-14 00:09:24.163816 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 00:09:24.164345 | orchestrator | 2025-05-14 00:09:24.164382 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 00:09:24.165639 | orchestrator | Wednesday 14 May 2025 00:09:24 +0000 (0:00:00.175) 0:00:00.175 ********* 2025-05-14 00:09:24.338275 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:09:24.446308 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:09:24.446590 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:09:24.450600 | orchestrator | 2025-05-14 00:09:24.450627 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 00:09:24.450635 | orchestrator | Wednesday 14 May 2025 00:09:24 +0000 (0:00:00.284) 0:00:00.459 ********* 2025-05-14 00:09:24.958712 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-14 00:09:24.958786 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-14 00:09:24.958949 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-14 00:09:24.961430 | orchestrator | 2025-05-14 00:09:24.962203 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-14 00:09:24.962408 | orchestrator | 2025-05-14 00:09:24.962723 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-14 00:09:24.963220 | orchestrator | Wednesday 14 May 2025 00:09:24 +0000 (0:00:00.511) 0:00:00.971 ********* 2025-05-14 00:09:25.329275 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 00:09:25.329348 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 00:09:25.329980 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 00:09:25.330263 | orchestrator | 2025-05-14 00:09:25.332603 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 00:09:25.334200 | orchestrator | Wednesday 14 May 2025 00:09:25 +0000 (0:00:00.369) 0:00:01.341 ********* 2025-05-14 00:09:25.863274 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 00:09:25.864185 | orchestrator | 2025-05-14 00:09:25.865393 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-14 00:09:25.866126 | orchestrator | Wednesday 14 May 2025 00:09:25 +0000 (0:00:00.533) 0:00:01.875 ********* 2025-05-14 00:09:29.176098 | orchestrator | ok: [testbed-node-0] 2025-05-14 00:09:29.178190 | orchestrator | ok: [testbed-node-1] 2025-05-14 00:09:29.180289 | orchestrator | ok: [testbed-node-2] 2025-05-14 00:09:29.180491 | orchestrator | 2025-05-14 00:09:29.183240 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-05-14 00:09:29.183282 | orchestrator | Wednesday 14 May 2025 00:09:29 +0000 (0:00:03.311) 0:00:05.186 ********* 2025-05-14 00:09:33.941801 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:09:33.941976 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:09:33.943080 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 139", "rc": 139, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-05-14 00:09:33 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-05-14 00:09:33 Using server version 10.11.12-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.12-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-05-14 00:09:33 incremental backup from 0 is enabled.\n[00] 2025-05-14 00:09:33 uses posix_fadvise().\n[00] 2025-05-14 00:09:33 cd to /var/lib/mysql/\n[00] 2025-05-14 00:09:33 open files limit requested 0, set to 1048576\n[00] 2025-05-14 00:09:33 mariabackup: using the following InnoDB configuration:\n[00] 2025-05-14 00:09:33 innodb_data_home_dir = \n[00] 2025-05-14 00:09:33 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-05-14 00:09:33 innodb_log_group_home_dir = ./\n[00] 2025-05-14 00:09:33 InnoDB: Using liburing\n2025-05-14 0:09:33 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).\n2025-05-14 0:09:33 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-05-14 0:09:33 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n250514 0:09:33 [ERROR] mariabackup got signal 11 ;\nSorry, we probably made a mistake, and this is a bug.\n\nYour assistance in bug reporting will enable us to fix this for the next release.\nTo report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report\na bug on https://jira.mariadb.org/.\n\nPlease include the information from the server start above, to the end of the\ninformation below.\n\nServer version: 10.11.12-MariaDB-deb12 source revision: cafd22db7970ce081bafd887359aa0a77cfb769d\n\nThe information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/\ncontains instructions to obtain a better version of the backtrace below.\nFollowing these instructions will help MariaDB developers provide a fix quicker.\n\nAttempting backtrace. 
Include this in the bug report.\n(note: Retrieving this information may fail)\n\nThread pointer: 0x0\nstack_bottom = 0x0 thread_stack 0x49000\nPrinting to addr2line failed\nmariabackup(my_print_stacktrace+0x2e)[0x5d0b2b0aa39e]\nmariabackup(handle_fatal_signal+0x229)[0x5d0b2abcd689]\n/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x776fd440e050]\nmariabackup(server_mysql_fetch_row+0x14)[0x5d0b2a819424]\nmariabackup(+0x76ca37)[0x5d0b2a7eba37]\nmariabackup(+0x75f32a)[0x5d0b2a7de32a]\nmariabackup(main+0x163)[0x5d0b2a783003]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x776fd43f924a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x776fd43f9305]\nmariabackup(_start+0x21)[0x5d0b2a7c8111]\nWriting a core file...\nWorking directory at /var/lib/mysql\nResource Limits (excludes unlimited resources):\nLimit Soft Limit Hard Limit Units \nMax stack size 8388608 unlimited bytes \nMax open files 1048576 1048576 files \nMax locked memory 8388608 8388608 bytes \nMax pending signals 128077 128077 signals \nMax msgqueue size 819200 819200 bytes \nMax nice priority 0 0 \nMax realtime priority 0 0 \nCore pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E\n\nKernel version: Linux version 6.11.0-25-generic (buildd@lcy02-amd64-027) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 17:20:50 UTC 2\n\n/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"\n 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\"\n", "stderr_lines": ["INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying service configuration files", "INFO:__main__:Deleting /etc/mysql/my.cnf", "INFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf", "INFO:__main__:Setting permission for /etc/mysql/my.cnf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla/mariadb", "INFO:__main__:Setting permission for /backup", "[00] 2025-05-14 00:09:33 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-05-14 00:09:33 Using server version 10.11.12-MariaDB-deb12-log", "mariabackup based on MariaDB server 10.11.12-MariaDB debian-linux-gnu (x86_64)", "[00] 2025-05-14 00:09:33 incremental backup from 0 is enabled.", "[00] 2025-05-14 00:09:33 uses posix_fadvise().", "[00] 2025-05-14 00:09:33 cd to /var/lib/mysql/", "[00] 2025-05-14 00:09:33 open files limit requested 0, set to 1048576", "[00] 2025-05-14 00:09:33 mariabackup: using the following InnoDB configuration:", "[00] 2025-05-14 00:09:33 innodb_data_home_dir = ", "[00] 2025-05-14 00:09:33 innodb_data_file_path = ibdata1:12M:autoextend", "[00] 2025-05-14 00:09:33 innodb_log_group_home_dir = ./", "[00] 2025-05-14 00:09:33 InnoDB: Using liburing", "2025-05-14 0:09:33 0 [Note] InnoDB: Number of transaction pools: 1", "mariabackup: io_uring_queue_init() failed with EPERM: 
sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).", "2025-05-14 0:09:33 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF", "2025-05-14 0:09:33 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)", "250514 0:09:33 [ERROR] mariabackup got signal 11 ;", "Sorry, we probably made a mistake, and this is a bug.", "", "Your assistance in bug reporting will enable us to fix this for the next release.", "To report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report", "a bug on https://jira.mariadb.org/.", "", "Please include the information from the server start above, to the end of the", "information below.", "", "Server version: 10.11.12-MariaDB-deb12 source revision: cafd22db7970ce081bafd887359aa0a77cfb769d", "", "The information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/", "contains instructions to obtain a better version of the backtrace below.", "Following these instructions will help MariaDB developers provide a fix quicker.", "", "Attempting backtrace. Include this in the bug report.", "(note: Retrieving this information may fail)", "", "Thread pointer: 0x0", "stack_bottom = 0x0 thread_stack 0x49000", "Printing to addr2line failed", "mariabackup(my_print_stacktrace+0x2e)[0x5d0b2b0aa39e]", "mariabackup(handle_fatal_signal+0x229)[0x5d0b2abcd689]", "/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x776fd440e050]", "mariabackup(server_mysql_fetch_row+0x14)[0x5d0b2a819424]", "mariabackup(+0x76ca37)[0x5d0b2a7eba37]", "mariabackup(+0x75f32a)[0x5d0b2a7de32a]", "mariabackup(main+0x163)[0x5d0b2a783003]", "/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x776fd43f924a]", "/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x776fd43f9305]", "mariabackup(_start+0x21)[0x5d0b2a7c8111]", "Writing a core file...", "Working directory at /var/lib/mysql", "Resource Limits (excludes unlimited resources):", "Limit Soft Limit Hard Limit Units ", "Max stack size 8388608 unlimited bytes ", "Max open files 1048576 1048576 files ", "Max locked memory 8388608 8388608 bytes ", "Max pending signals 128077 128077 signals ", "Max msgqueue size 819200 819200 bytes ", "Max nice priority 0 0 ", "Max realtime priority 0 0 ", "Core pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E", "", "Kernel version: Linux version 6.11.0-25-generic (buildd@lcy02-amd64-027) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 17:20:50 UTC 2", "", "/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"", " 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\""], "stdout": "Taking an incremental backup\n", "stdout_lines": ["Taking an incremental backup"]} 2025-05-14 00:09:34.100875 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-14 00:09:34.103000 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-14 00:09:34.105230 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-14 00:09:34.106518 | orchestrator | mariadb_bootstrap_restart 2025-05-14 
00:09:34.184477 | orchestrator | 2025-05-14 00:09:34.184865 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-14 00:09:34.185988 | orchestrator | skipping: no hosts matched 2025-05-14 00:09:34.186578 | orchestrator | 2025-05-14 00:09:34.190454 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-14 00:09:34.190484 | orchestrator | skipping: no hosts matched 2025-05-14 00:09:34.190496 | orchestrator | 2025-05-14 00:09:34.190508 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-14 00:09:34.190519 | orchestrator | skipping: no hosts matched 2025-05-14 00:09:34.191046 | orchestrator | 2025-05-14 00:09:34.191113 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-14 00:09:34.191460 | orchestrator | 2025-05-14 00:09:34.192001 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-14 00:09:34.192535 | orchestrator | Wednesday 14 May 2025 00:09:34 +0000 (0:00:05.011) 0:00:10.198 ********* 2025-05-14 00:09:34.380999 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:09:34.381581 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:09:34.383224 | orchestrator | 2025-05-14 00:09:34.385546 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-14 00:09:34.385581 | orchestrator | Wednesday 14 May 2025 00:09:34 +0000 (0:00:00.196) 0:00:10.395 ********* 2025-05-14 00:09:34.514339 | orchestrator | skipping: [testbed-node-1] 2025-05-14 00:09:34.514442 | orchestrator | skipping: [testbed-node-2] 2025-05-14 00:09:34.514457 | orchestrator | 2025-05-14 00:09:34.515086 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 00:09:34.516839 | orchestrator | 2025-05-14 00:09:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 00:09:34.516870 | orchestrator | 2025-05-14 00:09:34 | INFO  | Please wait and do not abort execution. 
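The retry fails identically, and because 200-infrastructure.sh runs under set -e, the non-zero exit of osism apply aborts the whole check script; the surrounding deploy task then records this as the ERROR with rc 2 below. The control flow, reduced to a sketch (the second incremental attempt appears to come from the osism layer, since the script issues the command only once):

    #!/usr/bin/env bash
    set -e   # any failing command ends the script with that exit code
    osism apply mariadb_backup -e mariadb_backup_type=full         # succeeded above
    osism apply mariadb_backup -e mariadb_backup_type=incremental  # rc != 0 -> script aborts here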
2025-05-14 00:09:34.517394 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-14 00:09:34.517592 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 00:09:34.518385 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 00:09:34.520494 | orchestrator | 2025-05-14 00:09:34.522425 | orchestrator | 2025-05-14 00:09:34.523610 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 00:09:34.524671 | orchestrator | Wednesday 14 May 2025 00:09:34 +0000 (0:00:00.133) 0:00:10.528 ********* 2025-05-14 00:09:34.525496 | orchestrator | =============================================================================== 2025-05-14 00:09:34.526198 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ------------ 5.01s 2025-05-14 00:09:34.526721 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.31s 2025-05-14 00:09:34.527434 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.53s 2025-05-14 00:09:34.528005 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2025-05-14 00:09:34.528522 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.37s 2025-05-14 00:09:34.529628 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2025-05-14 00:09:34.530196 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.20s 2025-05-14 00:09:34.532082 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.13s 2025-05-14 00:09:35.275211 | orchestrator | ERROR 2025-05-14 00:09:35.275513 | orchestrator | { 2025-05-14 00:09:35.275576 | orchestrator | "delta": "0:03:53.849781", 2025-05-14 00:09:35.275613 | orchestrator | "end": "2025-05-14 00:09:35.203041", 2025-05-14 00:09:35.275647 | orchestrator | "msg": "non-zero return code", 2025-05-14 00:09:35.276153 | orchestrator | "rc": 2, 2025-05-14 00:09:35.276201 | orchestrator | "start": "2025-05-14 00:05:41.353260" 2025-05-14 00:09:35.276235 | orchestrator | } failure 2025-05-14 00:09:35.304250 | 2025-05-14 00:09:35.304356 | PLAY RECAP 2025-05-14 00:09:35.304415 | orchestrator | ok: 23 changed: 10 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-05-14 00:09:35.304493 | 2025-05-14 00:09:35.525669 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-14 00:09:35.528636 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-14 00:09:36.276080 | 2025-05-14 00:09:36.276253 | PLAY [Post output play] 2025-05-14 00:09:36.293038 | 2025-05-14 00:09:36.293194 | LOOP [stage-output : Register sources] 2025-05-14 00:09:36.361642 | 2025-05-14 00:09:36.361947 | TASK [stage-output : Check sudo] 2025-05-14 00:09:37.239890 | orchestrator | sudo: a password is required 2025-05-14 00:09:37.405282 | orchestrator | ok: Runtime: 0:00:00.020511 2025-05-14 00:09:37.421523 | 2025-05-14 00:09:37.421688 | LOOP [stage-output : Set source and destination for files and folders] 2025-05-14 00:09:37.460001 | 2025-05-14 00:09:37.460324 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-05-14 00:09:37.542878 | orchestrator | ok 2025-05-14 00:09:37.550788 | 2025-05-14 
2025-05-14 00:09:37.550955 | LOOP [stage-output : Ensure target folders exist]
2025-05-14 00:09:38.022549 | orchestrator | ok: "docs"
2025-05-14 00:09:38.023243 |
2025-05-14 00:09:38.280355 | orchestrator | ok: "artifacts"
2025-05-14 00:09:38.529668 | orchestrator | ok: "logs"
2025-05-14 00:09:38.546554 |
2025-05-14 00:09:38.546715 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-14 00:09:38.581339 |
2025-05-14 00:09:38.581629 | TASK [stage-output : Make all log files readable]
2025-05-14 00:09:38.863908 | orchestrator | ok
2025-05-14 00:09:38.872688 |
2025-05-14 00:09:38.872842 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-14 00:09:38.909871 | orchestrator | skipping: Conditional result was False
2025-05-14 00:09:38.926209 |
2025-05-14 00:09:38.926363 | TASK [stage-output : Discover log files for compression]
2025-05-14 00:09:38.951334 | orchestrator | skipping: Conditional result was False
2025-05-14 00:09:38.964567 |
2025-05-14 00:09:38.964739 | LOOP [stage-output : Archive everything from logs]
2025-05-14 00:09:39.015952 |
2025-05-14 00:09:39.016155 | PLAY [Post cleanup play]
2025-05-14 00:09:39.025862 |
2025-05-14 00:09:39.025979 | TASK [Set cloud fact (Zuul deployment)]
2025-05-14 00:09:39.095603 | orchestrator | ok
2025-05-14 00:09:39.106953 |
2025-05-14 00:09:39.107087 | TASK [Set cloud fact (local deployment)]
2025-05-14 00:09:39.142910 | orchestrator | skipping: Conditional result was False
2025-05-14 00:09:39.153298 |
2025-05-14 00:09:39.153457 | TASK [Clean the cloud environment]
2025-05-14 00:09:39.779487 | orchestrator | 2025-05-14 00:09:39 - clean up servers
2025-05-14 00:09:40.639236 | orchestrator | 2025-05-14 00:09:40 - testbed-manager
2025-05-14 00:09:41.751597 | orchestrator | 2025-05-14 00:09:41 - testbed-node-2
2025-05-14 00:09:41.841866 | orchestrator | 2025-05-14 00:09:41 - testbed-node-1
2025-05-14 00:09:41.931117 | orchestrator | 2025-05-14 00:09:41 - testbed-node-3
2025-05-14 00:09:42.045108 | orchestrator | 2025-05-14 00:09:42 - testbed-node-4
2025-05-14 00:09:42.137399 | orchestrator | 2025-05-14 00:09:42 - testbed-node-0
2025-05-14 00:09:42.227800 | orchestrator | 2025-05-14 00:09:42 - testbed-node-5
2025-05-14 00:09:42.488125 | orchestrator | 2025-05-14 00:09:42 - clean up keypairs
2025-05-14 00:09:42.507768 | orchestrator | 2025-05-14 00:09:42 - testbed
2025-05-14 00:09:42.538005 | orchestrator | 2025-05-14 00:09:42 - wait for servers to be gone
2025-05-14 00:09:53.831536 | orchestrator | 2025-05-14 00:09:53 - clean up ports
2025-05-14 00:09:54.064506 | orchestrator | 2025-05-14 00:09:54 - 192dee38-73fc-48be-9bde-82e56e6cf4b3
2025-05-14 00:09:56.716018 | orchestrator | 2025-05-14 00:09:56 - 4769eab7-1798-46c7-a109-5dac0eedcb98
2025-05-14 00:09:56.905548 | orchestrator | 2025-05-14 00:09:56 - 5be32905-1aca-4fc8-a27c-c2752e73b72a
2025-05-14 00:09:57.162146 | orchestrator | 2025-05-14 00:09:57 - 720b83e2-0c6a-4d00-bc18-b6721d1fabfd
2025-05-14 00:09:57.488362 | orchestrator | 2025-05-14 00:09:57 - 8a4a9ceb-a748-49a2-a878-0f869dd4c2a9
2025-05-14 00:09:57.689120 | orchestrator | 2025-05-14 00:09:57 - f33837cc-6428-4261-91c3-70c1d6502de9
2025-05-14 00:09:57.893067 | orchestrator | 2025-05-14 00:09:57 - f8495074-a76c-4b95-ab7d-d36cf8150f42
2025-05-14 00:09:58.856384 | orchestrator | 2025-05-14 00:09:58 - clean up volumes
2025-05-14 00:09:58.992049 | orchestrator | 2025-05-14 00:09:58 - testbed-volume-3-node-base
2025-05-14 00:09:59.039127 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-5-node-base
2025-05-14 00:09:59.081289 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-2-node-base
2025-05-14 00:09:59.123546 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-4-node-base
2025-05-14 00:09:59.165703 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-0-node-base
2025-05-14 00:09:59.208122 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-manager-base
2025-05-14 00:09:59.250101 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-1-node-base
2025-05-14 00:09:59.293721 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-8-node-5
2025-05-14 00:09:59.335388 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-1-node-4
2025-05-14 00:09:59.377686 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-3-node-3
2025-05-14 00:09:59.421357 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-4-node-4
2025-05-14 00:09:59.466972 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-5-node-5
2025-05-14 00:09:59.513595 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-0-node-3
2025-05-14 00:09:59.561401 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-6-node-3
2025-05-14 00:09:59.603241 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-2-node-5
2025-05-14 00:09:59.648605 | orchestrator | 2025-05-14 00:09:59 - testbed-volume-7-node-4
2025-05-14 00:09:59.688675 | orchestrator | 2025-05-14 00:09:59 - disconnect routers
2025-05-14 00:09:59.756734 | orchestrator | 2025-05-14 00:09:59 - testbed
2025-05-14 00:10:00.640751 | orchestrator | 2025-05-14 00:10:00 - clean up subnets
2025-05-14 00:10:00.681465 | orchestrator | 2025-05-14 00:10:00 - subnet-testbed-management
2025-05-14 00:10:00.861591 | orchestrator | 2025-05-14 00:10:00 - clean up networks
2025-05-14 00:10:01.026959 | orchestrator | 2025-05-14 00:10:01 - net-testbed-management
2025-05-14 00:10:01.294670 | orchestrator | 2025-05-14 00:10:01 - clean up security groups
2025-05-14 00:10:01.329482 | orchestrator | 2025-05-14 00:10:01 - testbed-node
2025-05-14 00:10:01.412334 | orchestrator | 2025-05-14 00:10:01 - testbed-management
2025-05-14 00:10:01.499985 | orchestrator | 2025-05-14 00:10:01 - clean up floating ips
2025-05-14 00:10:01.538435 | orchestrator | 2025-05-14 00:10:01 - 81.163.193.58
2025-05-14 00:10:01.920742 | orchestrator | 2025-05-14 00:10:01 - clean up routers
2025-05-14 00:10:02.006662 | orchestrator | 2025-05-14 00:10:02 - testbed
2025-05-14 00:10:02.720301 | orchestrator | ok: Runtime: 0:00:23.140992
2025-05-14 00:10:02.724772 |
2025-05-14 00:10:02.724944 | PLAY RECAP
2025-05-14 00:10:02.725081 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-14 00:10:02.725150 |
2025-05-14 00:10:02.876158 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-14 00:10:02.877623 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-14 00:10:03.597104 |
2025-05-14 00:10:03.597260 | PLAY [Cleanup play]
2025-05-14 00:10:03.615602 |
2025-05-14 00:10:03.615728 | TASK [Set cloud fact (Zuul deployment)]
2025-05-14 00:10:03.673573 | orchestrator | ok
2025-05-14 00:10:03.682987 |
2025-05-14 00:10:03.683140 | TASK [Set cloud fact (local deployment)]
2025-05-14 00:10:03.717676 | orchestrator | skipping: Conditional result was False
2025-05-14 00:10:03.732801 |
2025-05-14 00:10:03.732930 | TASK [Clean the cloud environment]
2025-05-14 00:10:04.877524 | orchestrator | 2025-05-14 00:10:04 - clean up servers
2025-05-14 00:10:05.447653 | orchestrator | 2025-05-14 00:10:05 - clean up keypairs
2025-05-14 00:10:05.464932 | orchestrator | 2025-05-14 00:10:05 - wait for servers to be gone
2025-05-14 00:10:05.546113 | orchestrator | 2025-05-14 00:10:05 - clean up ports
2025-05-14 00:10:05.619814 | orchestrator | 2025-05-14 00:10:05 - clean up volumes
2025-05-14 00:10:05.710530 | orchestrator | 2025-05-14 00:10:05 - disconnect routers
2025-05-14 00:10:05.735324 | orchestrator | 2025-05-14 00:10:05 - clean up subnets
2025-05-14 00:10:05.753839 | orchestrator | 2025-05-14 00:10:05 - clean up networks
2025-05-14 00:10:05.905275 | orchestrator | 2025-05-14 00:10:05 - clean up security groups
2025-05-14 00:10:05.925559 | orchestrator | 2025-05-14 00:10:05 - clean up floating ips
2025-05-14 00:10:05.944821 | orchestrator | 2025-05-14 00:10:05 - clean up routers
2025-05-14 00:10:06.272912 | orchestrator | ok: Runtime: 0:00:01.464266
2025-05-14 00:10:06.276931 |
2025-05-14 00:10:06.277094 | PLAY RECAP
2025-05-14 00:10:06.277216 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-14 00:10:06.277278 |
2025-05-14 00:10:06.405297 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-14 00:10:06.407818 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-14 00:10:07.159811 |
2025-05-14 00:10:07.159992 | PLAY [Base post-fetch]
2025-05-14 00:10:07.175685 |
2025-05-14 00:10:07.175834 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-14 00:10:07.231001 | orchestrator | skipping: Conditional result was False
2025-05-14 00:10:07.238140 |
2025-05-14 00:10:07.238302 | TASK [fetch-output : Set log path for single node]
2025-05-14 00:10:07.278639 | orchestrator | ok
2025-05-14 00:10:07.284988 |
2025-05-14 00:10:07.285122 | LOOP [fetch-output : Ensure local output dirs]
2025-05-14 00:10:07.785918 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/909ac6d6933c43bb91e99e3e1a9563b8/work/logs"
2025-05-14 00:10:08.084551 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/909ac6d6933c43bb91e99e3e1a9563b8/work/artifacts"
2025-05-14 00:10:08.393886 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/909ac6d6933c43bb91e99e3e1a9563b8/work/docs"
2025-05-14 00:10:08.416672 |
2025-05-14 00:10:08.416859 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-14 00:10:09.389039 | orchestrator | changed: .d..t...... ./
2025-05-14 00:10:09.389355 | orchestrator | changed: All items complete
2025-05-14 00:10:09.389419 |
2025-05-14 00:10:10.122898 | orchestrator | changed: .d..t...... ./
2025-05-14 00:10:10.868008 | orchestrator | changed: .d..t...... ./
2025-05-14 00:10:10.897824 |
2025-05-14 00:10:10.897987 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-14 00:10:10.933059 | orchestrator | skipping: Conditional result was False
2025-05-14 00:10:10.935812 | orchestrator | skipping: Conditional result was False
2025-05-14 00:10:10.958707 |
2025-05-14 00:10:10.958818 | PLAY RECAP
2025-05-14 00:10:10.958916 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-14 00:10:10.958955 |
2025-05-14 00:10:11.090092 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-14 00:10:11.092525 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-14 00:10:11.843485 |
2025-05-14 00:10:11.843644 | PLAY [Base post]
2025-05-14 00:10:11.858190 |
2025-05-14 00:10:11.858422 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-14 00:10:12.863372 | orchestrator | changed
2025-05-14 00:10:12.874530 |
2025-05-14 00:10:12.874663 | PLAY RECAP
2025-05-14 00:10:12.874744 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-14 00:10:12.874825 |
2025-05-14 00:10:12.994236 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-14 00:10:12.996619 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-14 00:10:13.772423 |
2025-05-14 00:10:13.772601 | PLAY [Base post-logs]
2025-05-14 00:10:13.783320 |
2025-05-14 00:10:13.783480 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-14 00:10:14.245181 | localhost | changed
2025-05-14 00:10:14.255534 |
2025-05-14 00:10:14.255679 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-14 00:10:14.284411 | localhost | ok
2025-05-14 00:10:14.291442 |
2025-05-14 00:10:14.291602 | TASK [Set zuul-log-path fact]
2025-05-14 00:10:14.309214 | localhost | ok
2025-05-14 00:10:14.320173 |
2025-05-14 00:10:14.320313 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-14 00:10:14.357657 | localhost | ok
2025-05-14 00:10:14.363732 |
2025-05-14 00:10:14.364032 | TASK [upload-logs : Create log directories]
2025-05-14 00:10:14.884768 | localhost | changed
2025-05-14 00:10:14.889728 |
2025-05-14 00:10:14.889889 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-14 00:10:15.435160 | localhost -> localhost | ok: Runtime: 0:00:00.007750
2025-05-14 00:10:15.443897 |
2025-05-14 00:10:15.444066 | TASK [upload-logs : Upload logs to log server]
2025-05-14 00:10:16.027750 | localhost | Output suppressed because no_log was given
2025-05-14 00:10:16.030371 |
2025-05-14 00:10:16.030537 | LOOP [upload-logs : Compress console log and json output]
2025-05-14 00:10:16.092663 | localhost | skipping: Conditional result was False
2025-05-14 00:10:16.097926 | localhost | skipping: Conditional result was False
2025-05-14 00:10:16.112359 |
2025-05-14 00:10:16.112633 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-14 00:10:16.160575 | localhost | skipping: Conditional result was False
2025-05-14 00:10:16.161176 |
2025-05-14 00:10:16.164701 | localhost | skipping: Conditional result was False
2025-05-14 00:10:16.177559 |
2025-05-14 00:10:16.177776 | LOOP [upload-logs : Upload console log and json output]